How Is Apple Using Machine Learning?

Today, machine learning is found in almost every Apple product and service


The concept of artificial intelligence (AI) has been the subject of many discussions lately. According to some predictions, by the year 2100 AI will be able to learn on its own, outclass the human brain, and even fight for equal rights. Even though these are (still) just speculations, companies like Apple are already developing and deploying machine learning technology that is in its infancy today. So how is Apple using machine learning?

Apple's beginnings with deep learning technologies
Let's start with Apple's early use of AI. During the 1990s, the company used machine learning in its products for handwriting recognition. Those techniques were, of course, far more primitive than today's.

Today, machine learning is found in almost every Apple product and service. Apple uses deep learning to extend battery life between charges on its devices, detect fraud on the Apple Store, recognize the locations and faces in your photos, and help choose news stories for you. Machine learning determines whether Apple Watch owners are really exercising or just walking around. It even figures out whether you'd be better off switching to the cell network because of a weak Wi-Fi signal.

Apple's smart assistant
In 2011, Apple became the first tech giant to integrate a smart assistant into its operating system. That assistant is Siri, an adaptation of a standalone app that Apple had purchased (along with the app's development team). Siri 'exploded', drawing ecstatic initial reviews. Over the next few years, however, users wanted to see Apple deal with Siri's shortcomings, and in 2014 Siri got a 'brain transplant'.

Siri's voice recognition was moved to a neural-network-based system. The system began leveraging machine learning techniques including deep neural networks (DNNs), convolutional neural networks, long short-term memory units, gated recurrent units, and n-grams. Siri now ran on deep learning, even though to users it looked exactly the same.
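To make one of the techniques named above concrete, here is a minimal sketch of a gated recurrent unit, reduced to scalar state for readability. The weights are made-up illustrative constants; Siri's actual models are vector-valued, trained, and vastly larger.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One step of a (scalar) gated recurrent unit.

    The update gate z decides how much of the new candidate state to keep;
    the reset gate r decides how much of the old state feeds the candidate.
    The weights in `w` are illustrative constants, not trained values.
    """
    z = sigmoid(w["wz"] * x + w["uz"] * h)               # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)               # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_cand                    # blended new state

weights = {"wz": 0.5, "uz": -0.3, "wr": 0.8, "ur": 0.1, "wh": 1.2, "uh": 0.7}
h = 0.0
for x in [0.2, -0.5, 1.0]:  # a toy input "sequence" (think: audio features)
    h = gru_step(x, h, weights)
print(h)
```

Because the new state is always a blend of the old state and a tanh-bounded candidate, the hidden state stays in (-1, 1) no matter how long the sequence runs, which is exactly what makes gated units stable on long inputs like speech.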

Every iPhone user has come across Apple's AI: when you swipe on your screen to get a short list of the apps you're most likely to open next, when the phone identifies a caller who isn't in your contact list, when a map location pops up for the accommodation you've reserved, or when you're reminded of an appointment you forgot to put into your calendar. Apple's neural-network-trained system watches as you type, detecting key items and events such as appointments, contacts, and flight information. The company does not collect this information; it stays on your iPhone and in cloud-based storage backups, where it is filtered so that it can't be inferred. All of this is made possible by Apple's adoption of neural networks and deep learning.

At this year's WWDC, Apple showed how machine learning lets a new Siri-powered watch face customize its content in real time (news, traffic information, reminders, upcoming meetings), surfacing each item when it is likely to be most relevant.

Making mobile AI faster with new machine learning API
Apple wants to make the AI on your iPhone as powerful and fast as possible. A week ago, the company unveiled a new machine learning API named Core ML. Its most important benefit is faster AI responsiveness when executing on the Apple Watch, iPad, and iPhone. What does this cover? Everything from face recognition to text analysis, affecting a wide range of apps.

The essential machine learning tools that Core ML supports include neural networks (deep, convolutional, and recurrent), tree ensembles, and linear models. As for privacy, the data used to improve the user experience never leaves the user's tablet or phone.
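Here is a minimal, pure-Python sketch of the two simplest model families named above, a linear model and a tiny tree ensemble. Core ML evaluates compiled models rather than code like this, and the weights and thresholds below are invented for illustration; the point is that both families reduce to cheap arithmetic, which is why they run fast on a phone or watch.

```python
def linear_model(features, weights, bias):
    """Score = w . x + b, cheap enough to evaluate entirely on-device."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def tree_ensemble(features, trees):
    """Average the votes of several one-split decision 'stumps'.

    Each tree is (feature_index, threshold, left_vote, right_vote).
    """
    votes = []
    for feature_index, threshold, left, right in trees:
        votes.append(left if features[feature_index] <= threshold else right)
    return sum(votes) / len(votes)

x = [0.4, 1.5]  # hypothetical input features
print(linear_model(x, [0.8, -0.2], 0.1))   # 0.8*0.4 - 0.2*1.5 + 0.1, about 0.12
print(tree_ensemble(x, [(0, 0.5, 1.0, 0.0),    # x[0] <= 0.5, vote 1.0
                        (1, 1.0, 1.0, 0.0)]))  # x[1] >  1.0, vote 0.0
```

Neural networks add nonlinearities on top of the same kind of multiply-and-add, which is why all three families fit one inference API.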

Making AI work better on mobile devices has become an industry-wide trend, and other companies are pursuing it as well. As for Apple, it's clear that deep learning technology has changed its products; it's less clear whether it is changing the company itself. Apple carefully controls the user experience, with everything precisely coded and designed in advance, yet with machine learning engineers must step back and let the software discover solutions on its own. If Apple manages to adjust to this new reality, will machine learning systems have a hand in product design?

More Stories By Nate Vickery

Nate M. Vickery is a business consultant from Sydney, Australia. He has a degree in marketing and almost a decade of experience in managing companies through the latest technology trends. Nate is also the editor-in-chief at bizzmarkblog.com.
