Artificial Intelligence (AI) at the Edge

by Alexander Stevenson, Cisco DevNet IoT


Artificial Intelligence (AI) is often used as an umbrella term for intelligent agents that take inputs from their environment and attempt to achieve a cognitive outcome the way a human would. This requires a certain level of programming, training, or applied problem-solving to become practical. (0) AI has been around for decades, but recent increases in computational power have made the AI winters of the past seem a distant memory. Machine Learning (ML), a category of AI in which computers are trained to make correct decisions based on data sets, is becoming extremely popular.

Edge computing is a form of distributed computing that brings computation and data storage closer to the sources of data. It is exemplified by the proliferation of smart devices but is an architecture rather than a specific technology. The Edge is where smaller, remote, and wireless devices extend our networks further than ever before.

Can we make our dream come true and bring ML to the Edge? Until recently, the answer has mostly been no, but we are making more and more headway in this direction. First, we must understand that ML training (deep learning) is not well suited to execution at the Edge. Why? Because bandwidth, compute, and power are precious there. But once a model has been trained offsite on larger, more powerful computers, in a data center or even the cloud, we can still take advantage of Inference at the Edge.

With Inference at the Edge, a trained ML model is deployed on Edge devices and fed new data, with the whole operation optimized for performance. This we can afford at the Edge, and it is well worth it. Check out these technical benefits of running Inference at the Edge, along with a minimal sketch after the list:

  • Reduced network bandwidth
  • Real-time predictions
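
To make this concrete, here is a minimal sketch of Inference at the Edge, assuming a model that was already trained and quantized offsite and a TensorFlow Lite runtime on the device; the model file name and input shape are illustrative placeholders, not details from this article:

```python
# Minimal sketch: on-device inference with the TensorFlow Lite runtime.
# Assumes "model_quantized.tflite" was trained and converted offsite
# (data center or cloud) and copied to the Edge device.
import numpy as np
import tflite_runtime.interpreter as tflite  # lightweight interpreter for Edge hardware

interpreter = tflite.Interpreter(model_path="model_quantized.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# New data arrives locally (e.g., a camera frame); no round trip to the cloud.
frame = np.random.rand(1, 224, 224, 3).astype(np.float32)  # stand-in for real sensor data

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()  # inference runs entirely on the device
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```

Only the (small) prediction ever needs to leave the device, which is exactly where the bandwidth savings come from.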

 

The use cases for running Inference at the Edge span all industries:

  • Smart cities
  • Video surveillance
  • Predictive maintenance in factories
  • Vehicle collision avoidance
  • Voice/sound recognition
  • Image recognition

 

As stated previously, the three major constraints at the Edge are bandwidth, compute, and power, but AI/ML adds challenges of its own:

 

ML/AI Frameworks

Developers use a wide range of frameworks and tools, so providing support for popular frameworks (PyTorch, TensorFlow, etc.) is a must.
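
As a rough illustration of why framework support matters, a common pattern is to train in one framework and export to a portable format such as ONNX, which many Edge runtimes can consume; the toy model below is an assumption for demonstration only:

```python
# Sketch: export a (toy) trained PyTorch model to ONNX so an Edge runtime
# that doesn't speak PyTorch can still run it.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(10),  # lazily sized; materialized by the forward pass below
)
model.eval()

dummy_input = torch.randn(1, 3, 64, 64)  # example input shape
_ = model(dummy_input)  # initialize lazy layers before export
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)
```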

 

Management
Providing a standard way to schedule, deploy, and update applications is an absolute must. (1)

 

Regardless of the challenges, engineers, computer scientists, and investors are confident in a bright future for AI at the Edge, as evidenced by the hundreds of millions of dollars being invested in Edge AI-focused chip manufacturing and Edge computing platforms. (2)

 

Cisco Leads the Way in Edge AI

 

Cisco Webex is leading the way in AI, particularly in Natural Language Processing (NLP). NLP is a crucial step in AI evolution, comprising Phase 2 of the Five Phases of Assisted AI. (0)

 

The Five Phases of Assisted AI

 

[Figure: The Five Phases of Assisted AI]

 

Webex uses NLP as part two of a three-part process to achieve cutting-edge speech recognition, which is common to all voice-based user interfaces.

[Figure: the three-stage speech recognition pipeline common to voice-based user interfaces. Source: NVIDIA]

 

But NLP wasn't Cisco or Webex's first foray into AI. Back in 2017, Webex introduced ML-based noise detection. With this, Webex uses AI to recognize loud and annoying background noises and suppress or filter them out (typing on a keyboard, rustling papers, etc.). In 2020, Webex introduced new noise removal technology, powered by AI, which goes beyond noise suppression by 1) distinguishing speech from background noise, 2) removing background noise in real time, and 3) enhancing your voice to elevate communication, independent of language. (0)
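
The production systems are proprietary, but a classic signal-processing relative of this idea, spectral subtraction, shows mechanically what "removing background noise in real time" means. This is a rough sketch that assumes a short noise-only recording for calibration; it is not Webex's algorithm:

```python
# Sketch: spectral-subtraction noise suppression with overlap-add.
# Frames the signal, estimates a noise floor from a noise-only sample,
# and attenuates frequency bins that fall below that floor.
import numpy as np

def suppress_noise(signal, noise_sample, frame=512, hop=256):
    window = np.hanning(frame)
    # Estimate the noise floor from a noise-only snippet (e.g., keyboard hum).
    noise_mag = np.abs(np.fft.rfft(noise_sample[:frame] * window))
    out = np.zeros(len(signal), dtype=np.float64)
    for start in range(0, len(signal) - frame, hop):
        chunk = signal[start:start + frame] * window
        spec = np.fft.rfft(chunk)
        mag, phase = np.abs(spec), np.angle(spec)
        mag = np.maximum(mag - noise_mag, 0.0)            # subtract the noise floor
        cleaned = np.fft.irfft(mag * np.exp(1j * phase), n=frame)
        out[start:start + frame] += cleaned * window      # overlap-add reconstruction
    return out
```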

 

NLP and noise detection/removal occur largely at the Edge, where the speech data is collected. Because of the fast-paced nature of speech, processing for these tasks should occur as close to the end device as possible; if not on the device itself, then one hop upstream, on the central hub or gateway for the IoT devices in the vicinity.

 

Cisco also has many other solutions incorporating AI that are beyond the scope of this article, including Cisco Stealthwatch and Cisco DNA Center's AI-driven insights, to name a few.

 

For more information on how to securely extract, transform, govern, and deliver data from IoT edge devices to applications programmatically, via a solution that simplifies edge-to-multi-cloud data flows, see Cisco Edge Intelligence.

 

 

Cisco and Google Collaborate for Open Hybrid Cloud AI/ML

 

Cisco and Google Cloud have also collaborated to create an open hybrid cloud architecture to help customers maximize their investments across cloud and on-premises environments. In 2018, Cisco announced that its Unified Computing System (UCS) and HyperFlex platforms would leverage Kubeflow, providing production-grade on-premises infrastructure for running AI/ML jobs. (5)

 

“This joint Cisco and Google solution brings Google’s leading machine learning capabilities and frameworks to the enterprise—providing businesses the ability to easily and quickly deploy AI or ML workloads in their own data center and reduce the time to insights.”

— Kaustubh Das, Vice President of the Computing Systems Product Group at Cisco
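
For a sense of what "running AI/ML jobs" on such a platform looks like, here is a minimal, hypothetical Kubeflow pipeline using the kfp v2 Python SDK; the component body is a placeholder, not Cisco's or Google's actual workload:

```python
# Sketch: a one-step Kubeflow pipeline compiled to a spec that a
# Kubeflow Pipelines installation (e.g., on UCS/HyperFlex) could run.
from kfp import compiler, dsl

@dsl.component
def train_model(epochs: int) -> str:
    # Placeholder training step; a real component would fit and save a model.
    print(f"training for {epochs} epochs")
    return "model-v1"

@dsl.pipeline(name="on-prem-training")
def training_pipeline(epochs: int = 10):
    train_model(epochs=epochs)

if __name__ == "__main__":
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```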

 

 

Explore AI & ML Use Cases for Line of Business Apps

 


https://developer.cisco.com/ai/

 

 

New to Machine Learning?

 

Explore these two self-paced developer tutorials using Cisco's latest technologies in the DevNet Learning Labs.

 

1. Explore Generative AI Capabilities

AI assistants and large language models (LLMs) can help network engineers and infrastructure developers accelerate their work, troubleshoot, handle errors, validate config files, and so forth.

This tutorial describes how you can use LLMs in your work.
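
As a taste of the kind of task the tutorial covers, here is a hedged sketch of asking an LLM to sanity-check a config file through an OpenAI-compatible chat completions endpoint; the endpoint, model name, and config snippet are illustrative assumptions, not part of the Learning Lab:

```python
# Sketch: LLM-assisted config validation via an OpenAI-compatible API.
import os
import requests

config_snippet = """\
interface GigabitEthernet0/1
 ip address 10.0.0.1 255.255.255.0
 no shutdown
"""

response = requests.post(
    "https://api.openai.com/v1/chat/completions",  # any OpenAI-compatible endpoint
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [
            {"role": "system", "content": "You review network device configs for errors."},
            {"role": "user", "content": f"Validate this config:\n{config_snippet}"},
        ],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])  # the model's review
```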

 

2. Mixtral 8x7B and Llama 3 GenAI models with APIs 

This Learning Lab introduces you to leading open-source and open-access language models, such as Mixtral 8x7B and Llama 3.

It also includes predefined commands and instructions to help you learn to solve developer-related tasks, including working with Cisco Security APIs.

When you complete this Learning Lab, you will be able to:

  • Create API calls and process the model completion
  • Avoid model knowledge cutoffs (see the sketch after this list)
  • Use LLMs with Cisco Security APIs
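
On the knowledge-cutoff point, the usual trick is to fetch current data yourself and hand it to the model in the prompt. A hedged sketch, with an invented URL standing in for a real data source:

```python
# Sketch: ground the prompt in freshly fetched data so answers aren't
# limited to the model's training cutoff. The URL is a placeholder.
import requests

advisories = requests.get(
    "https://example.com/api/security/advisories/latest", timeout=30
).json()

prompt = (
    "Using ONLY the advisories below, summarize anything affecting IOS XE:\n"
    f"{advisories}"
)
# `prompt` then goes out as the user message of a chat completion,
# exactly as in the earlier validation sketch.
```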

 

 

Cisco Meraki and Edge AI Innovation

 

Meraki is another area where Cisco is making headway with AI at the Edge. Already, Meraki MV Sense Custom CV lets partners and customers build their own ML models to run directly on Meraki cameras. And this is just the beginning.
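
To show the shape of working with camera-side inference, here is a hypothetical consumer of MV detections over MQTT using paho-mqtt; the broker address, camera serial, and topic format are assumptions for illustration, so check the MV Sense documentation for the real values:

```python
# Sketch: subscribe to object detections that the camera's on-board
# model publishes to an MQTT broker configured in the Meraki dashboard.
import json
import paho.mqtt.client as mqtt

CAMERA_SERIAL = "Q2XX-XXXX-XXXX"  # placeholder serial

def on_message(client, userdata, msg):
    detections = json.loads(msg.payload)
    print(f"{msg.topic}: {detections}")

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("mqtt-broker.local", 1883)  # your configured broker
client.subscribe(f"/merakimv/{CAMERA_SERIAL}/raw_detections")  # assumed topic format
client.loop_forever()
```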

 

“At Meraki, we have an effective recipe to execute AI at the edge. The combination of a cloud-managed solution with edge-based processing is making operations for our partners and customers easier as they enjoy the benefits of advanced computer vision technologies.” (3)

 

 

Learn more about Edge AI from Cogniac + Meraki!

 

Using AI to train AI  

Learn how our ecosystem partner Cogniac is leveraging AI to train AI, building applications from small datasets. Their innovative, low-code visual intelligence platform enables users to build their own AI models quickly and confidently. Presented by Y. Ming Tsang, Director of AI, Cogniac.

Watch the video here

 

Data set and model training

Learn how to collect selected datasets for efficient AI model training. Presented by David Gaumonth, Engineering Manager at Cisco Meraki

Watch the video here

 

ML Operations

Learn about machine learning operations in the MV smart camera ecosystem. Presented by Amit Kumar Saha, Principal Engineer, Cisco Meraki

Watch the video here

 

Deploying computer vision at the edge  

Learn more about deploying MV Sense custom computer vision models at the edge with MV smart cameras. Presented by Elvira Dzhuraeva, TME Camera Intelligence, Cisco Meraki

Watch the video here

 

Responsible AI/ML

Learn more about Cisco's policies on ethical and responsible AI, and how Meraki MV smart cameras deliver insight and analytics without compromising privacy or security. Presented by Alissa Cooper, VP/CTO, Technology Policy and Cisco Fellow.

Watch the video here

 

And finally, learn more about Building Scalable Computer Vision via our blog on Edge AI here. (3)

 

 

There you have it: AI at the Edge. I may not have kept you on the edge of your seat, but if you've read this far, I'd like to welcome you to the state, and the future, of Artificial Intelligence at the Edge.

 

 

Sources:

 

0. Collaboration with the X Factor: How AI Is Transforming the Way We Work, by Keith Griffin, CTO of Intelligence and Analytics, Cisco Collaboration Group

https://www.webex.com/content/dam/webex/eopi/Americas/USA/en_us/documents/pdf/How%20AI%20is%20transforming%20the%20way%20we%20work%20Webex.pdf?elq=2144cfe80199448b8ce09fcdd19fd08b&elqCampaignId=&elqTrackId=e794091708754eb2b3ef65368d62af65&elqaid=1003...

 

1. Cisco Blog: Artificial Intelligence and Machine Learning at the Edge, by Ashutosh Malegaonkar

https://blogs.cisco.com/developer/ai-ml-at-the-edge

 

2. Trends in IoT: Part 2 - Edge Computing, from the Cisco Investments Team

https://www.ciscoinvestments.com/trends-in-iot-part-2 

 

3. Cisco Meraki Blog: Edge AI: Building Scalable Computer Vision, by Andreas Nordgren

https://meraki.cisco.com/blog/2022/04/edge-ai-building-scalable-computer-vision/

 

4. Meraki Community: Learn more about Edge AI from Cogniac + Meraki!, by Ana Nennig

https://community.meraki.com/t5/Marketplace-Announcements/Learn-more-about-Edge-AI-from-Cogniac-Meraki/ba-p/155903

 

5. Cisco Blog: Cisco UCS and HyperFlex for AI/ML Workloads in the Data Center, by Kaustubh Das

https://blogs.cisco.com/datacenter/cisco-ucs-and-hyperflex-for-ai-ml-workloads-in-the-data-center

11 Replies

1rwycswh
Level 1

Edge AI refers to artificial intelligence used in peripheral computing environments: rather than in a centralized cloud computing facility or offsite data center, AI computations are carried out at the edge of a particular network, typically on the device where the data is generated.

jhondrake3532
Level 1

It's fascinating to see the advancements in AI and its integration with edge computing. ML training may not be ideal for execution at the edge due to constraints, but inference at the edge offers benefits like reduced network bandwidth and real-time predictions. The use cases for running inference at the edge are vast, spanning various industries. Cisco is a leader in edge AI, with solutions like Webex leveraging NLP and noise detection/removal. The collaboration between Cisco and Google Cloud further expands the possibilities of AI/ML in hybrid cloud architectures. Meraki is also making headway with AI at the edge, especially in computer vision. Exciting times ahead for AI at the edge!

oliviamark2769
Level 1

Edge AI, or AI at the Edge, refers to the practice of executing artificial intelligence and machine learning models on local devices such as smartphones, sensors, or cameras rather than relying on cloud computing. By processing data near its source, this decentralised method allows for quicker, real-time decision-making and analysis, thereby minimising latency and reliance on network connections. It provides improved privacy and security by retaining sensitive data locally.

Martin L
VIP

Thanks for sharing!

oliviamark2769
Level 1

Edge AI is the practice of implementing AI algorithms directly on local devices like smartphones, sensors, and IoT devices. With this approach, data can be processed and decisions made in real time at the source of the data, without requiring a constant connection to the cloud. The main advantages are quicker response times, enhanced security and privacy, and offline functionality.

oliviamark2769
Level 1

Local processing: Rather than transmitting data to a central server, AI models operate directly on the local "edge" device.
Accelerated decisions: As data does not have to be sent to and retrieved from the cloud, analysis and response can occur within milliseconds, which is essential for time-sensitive applications.
Offline functionality: Essential systems can keep running despite a loss of network connectivity.
Improved privacy: Sensitive data can be handled locally, requiring only the transmission of anonymized results or insights to the cloud. This enhances security and aids in compliance.

oliviamark2769
Level 1

Autonomous vehicles: A self-driving car employs artificial intelligence to process sensor data instantaneously, identify obstacles, and determine driving actions.
Healthcare: Smartwatches are capable of local analysis of heart rate and other vital signs, whereas medical devices can leverage edge AI to expedite the analysis of imaging scans without the need for transferring large files via the network.
Smart manufacturing: Edge AI can facilitate predictive maintenance and avert unanticipated downtime by locally processing sensor data to monitor equipment for possible failures.
Smart cities: By integrating AI into traffic lights and surveillance cameras, local analysis of traffic and security data can take place, leading to enhanced traffic flow and public safety.
Smart home devices: Voice assistants handle commands directly on the device, while security cameras identify intruders in their immediate vicinity.

oliviamark2769
Level 1

Edge AI denotes the practice of executing AI algorithms and machine learning models directly on local devices (like sensors, smartphones, or industrial machinery) instead of depending on centralized cloud servers or data centers. This method allows for instantaneous decisions, improved privacy, and operation without a constant internet connection.

oliviamark2769
Level 1

The Functioning of Edge AI
Usually, the process consists of two primary phases:
Training: Due to their high computational requirements, AI models are typically trained on large datasets within powerful centralized cloud environments.
Inference: After training, the optimized and frequently "lightweight" AI models are deployed on edge devices with limited resources. These devices use the model to perform local, real-time data analysis and make independent decisions.

oliviamark2769
Level 1

Low Latency: For time-sensitive applications such as autonomous vehicles or industrial robotics, it is essential to process data locally in order to eliminate the delay (latency) caused by transmitting data to the cloud and awaiting a response.
Improved Privacy and Security: By processing and storing sensitive data on the device, its exposure to network vulnerabilities is reduced, aiding organizations in adhering to data privacy regulations (such as GDPR, HIPAA).

oliviamark2769
Level 1

Reduced Bandwidth Utilization and Expenses: Only pertinent insights or summaries are transmitted to the cloud (when necessary), which significantly diminishes the volume of data transferred across networks and cuts down operational expenses.
Offline Functionality and Reliability: Edge AI systems can work independently in settings with minimal or no internet access, guaranteeing ongoing operation and robustness of the system.
Energy Efficiency: Local data processing usually consumes less energy than the ongoing transmission of data to remote servers. This extends the battery life of IoT and mobile devices and aids sustainability efforts.