Interactive systems are analytical applications that combine multiple reactive applications to address a complex set of interactive use cases. They involve more than one set of learning tasks because the interactions they support are more diverse and require broader analytical scope. That said, these systems are still not general-purpose, so they are still classified as “narrow”; however, Artificial Intelligence (AI) applications in this category tend to garner most of the attention in relation to high-profile AI deployments. The following outlines some specific examples that fall into this group.
Digital Assistants – Secretaries of the 21st Century
A popular set of interactive Artificial Intelligence (AI)-driven solutions can be classified as digital assistants. Some assistants are strictly text-based; others engage primarily through spoken interactions. There are many niche players in this space that leverage a combination of capabilities, yet nearly all of them use some form of Natural Language Processing (NLP) capability (text and/or voice). Four major players in this space have created virtual personalities to personify their digital assistants.
These four assistants are classified as interactive systems rather than single Artificial Intelligence (AI)-driven applications because they combine multiple applications and technologies to enable a complex intelligent solution. These systems work in similar ways, typically with at least three components.
1. Natural Language Processing
Whether handling text-based interactions (Facebook’s M) or primarily verbal ones (Alexa, Cortana, Siri), the first task of a digital assistant is to interpret what you have communicated and respond in a way that makes sense. How well it interprets this level of communication is one of the primary determinants of how a digital assistant is perceived.
2. Backend Knowledge Base
There is usually a programming interface between the digital assistant’s communication channel (smartphone, IoT device, messenger app) and a staged set of predefined “answers” or instructions that drive subsequent actions. The NLP agent interprets what has been requested and consults the knowledge base (usually external to the NLP device) to determine the appropriate response.
3. External Application Integration
This is where digital assistants interact with other applications, either through hardware integration (e.g., Siri/iPhone, Alexa/Amazon Echo) or by establishing permission (e.g., Google Assistant gaining access to your calendar). Using recommendations from the knowledge base, the agent will interact with an external application, if necessary, to fulfill the user’s request.
The capabilities of all three components can improve over time, because the NLP models and the decisions they drive continue to learn from their interaction history. Just as with a human, past experiences help better align specific requests with the right actions.
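The three components above can be sketched as a minimal pipeline. Everything below — the intents, knowledge-base entries, and function names — is an illustrative assumption, not any vendor’s actual API:

```python
# Hypothetical sketch of the three digital-assistant components described above.
# All intents, responses, and names are illustrative assumptions.

def interpret(utterance: str) -> str:
    """Component 1 (NLP): map a raw utterance to an intent.
    A real assistant uses trained language models; naive keyword
    matching stands in for that here."""
    text = utterance.lower()
    if "weather" in text:
        return "get_weather"
    if "calendar" in text or "meeting" in text:
        return "check_calendar"
    return "unknown"

# Component 2 (backend knowledge base): predefined "answers" and
# instructions that drive subsequent actions, keyed by intent.
KNOWLEDGE_BASE = {
    "get_weather":    {"action": "weather_api",  "reply": "Here is the forecast."},
    "check_calendar": {"action": "calendar_app", "reply": "Checking your schedule."},
    "unknown":        {"action": None,           "reply": "Sorry, I didn't catch that."},
}

def call_external_app(action: str) -> str:
    """Component 3 (external application integration): fulfill the
    request by invoking another application, stubbed out here."""
    return f"[{action} invoked]"

def assistant(utterance: str) -> str:
    """Run one request through all three components in order."""
    intent = interpret(utterance)       # component 1
    entry = KNOWLEDGE_BASE[intent]      # component 2
    if entry["action"]:
        return entry["reply"] + " " + call_external_app(entry["action"])  # component 3
    return entry["reply"]
```

The design mirrors the text: the NLP step only classifies the request, the knowledge base decides what the response should be, and the external integration is invoked only when fulfilling the request requires another application.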
Tracking Retail – Amazon Go
Just as it does when you visit its website, Amazon tracks all of your shopping behaviors in the Amazon Go shopping experience. This tells Amazon how customers traverse the store, which product placements drive the most sales, and how the purchase experience translates into actual results. Using analytics, Amazon can create customized, on-demand discounts related to your current or prior purchase behavior. Additionally, some aspects of this experience allow users to try products in person but still buy online and have them delivered, or vice versa.
This melding of channels will eliminate one of the most complex aspects of understanding customer buying decisions across multiple channels.
One of the biggest aspects of this model, however, is the elimination of direct human involvement in completing transactions. By moving to a self-service model, all activities will be analytically driven over time. Customers enter the store, navigate based on their individual shopping objectives (ideally through Amazon-influenced product placement and strategic discounts/offers), and leave when their purchase is complete (with the transaction being completed automatically). The presumption is that the experience will not only be consistent, but improve for the consumer over time, with none of it dependent on store staff.
Fast Food Industry
Andy Puzder, CEO of fast food chains Carl’s Jr. and Hardee’s, told Business Insider that, unlike human workers, robots are “always polite, they’re always deferential, they never take a vacation, they never show up late, never have a slip-and-fall, or an age, gender, or race discrimination case.” He sees a tremendous upside to automating the fast food industry.
At the 2016 NRA Show, robots were one of the hottest topics in restaurant technology. For example, Suzumo International has developed a solution for bringing sushi to the masses quickly and efficiently: its latest “sushi robot” can make 4,000 pieces of sushi an hour, or a full roll every 12 seconds. It could help all-you-can-eat buffets, as well as sushi restaurants inside sports stadiums, schools, hospitals, and more, produce large volumes of sushi in a fraction of the time with fewer staff. In the coffee world, Cafe X has created an automated barista and “coffee shop.”
Cafe X is 100 percent automated, from the ordering and payment system to the preparation and delivery of the coffee, and it is much faster than any existing coffee shop experience. Once refinement of the system is complete, the cost of operating this “coffee shop” is expected to be far lower than the cost of 2.5 baristas. Three lines can form simultaneously, and the system can deliver orders in record speed. Quality reviews suggest it meets or exceeds a comparable Starbucks product. It is a one-click buying experience, closely aligned with the experience we see online or with Uber.
Self-Driving Vehicles
Whether augmenting humans with a nonhuman co-pilot, revolutionizing mobility services, or reducing the need for sprawling parking lots within cities, self-driving cars have the potential to do amazing things. Driving is complex, and drivers need to anticipate many unforeseen circumstances. Snow, closed roads, and a child running across the street are examples of scenarios you can’t fully account for in coded rules. Therein lies the value of deep learning algorithms; they can learn, adapt, and improve. No longer the stuff of science fiction, fully autonomous cars are projected to be on the road by 2019 and could number 10 million by 2020. The potential savings to the freight transport industry is estimated at $168 billion annually. Savings are expected to come from labor ($70 billion), fuel efficiency ($35 billion), productivity ($27 billion), and accidents ($36 billion), excluding any savings from non-truck freight modes such as air and rail.