Ask Me Anything Event: Global AI Architect and Futurist | SXSW Speaker

Brooke Hammer
Community Manager

Welcome to the Cisco Community Ask Me Anything conversation!  

Submit your questions from Wednesday, April 9 to Wednesday, April 23 to our guest, Annie Hardy, Global AI Architect and Futurist at Cisco. She helps Cisco partners plan for the emerging requirements that the AI Era brings, balancing the desires to evolve their workforce, be profitable, and also innovate responsibly. She has recently spoken at SXSW Austin! 

_____________________________________________________________________________________


Current team: Cisco’s Global Partner Engineering team | LinkedIn 

Years in the tech industry: 20 

Education & certifications: B.A., Austin College; certification in Strategic Foresight from the University of Houston College of Technology  

Key areas of expertise: AI, Spatial Computing, Human-Machine interaction, Strategic Foresight, Thought Leadership  

Hometown & where you live now: Austin, Texas area  

One thing you're currently obsessed with: Finding the perfect white button-down shirt  

_____________________________________________________________________________________

 

Let's get the conversation started with Annie... 

 

 1. What inspired you to pursue a career in tech, and what keeps you excited about the work you do today? 

I actually didn’t intend to go into technology. I didn’t think I was that smart until I was in my thirties – because I wasn’t super good at math. I struggled. I liked chemistry but ended up studying communications in college after unexpectedly falling in love with public speaking. And then after college I became a social worker, wanting to save the world. But as much as I wanted to help people, I was always more curious about how my organization ran, how we could improve – I was always able to so easily explain complex topics to others and understand deeply technical topics. I learned more and more, moved into a role in tech marketing, got more and more responsibility, and then finally started my own company to do what I wanted to do, with technology companies as my clients. So, I adopted some technology products as I marketed and researched them, but entrepreneurship is what flung me more deeply into the field. No one told me I belonged in technology - I had to decide that for myself.  

 

 2. As an AI Architect & Futurist, how do you see AI transforming the way businesses operate in the next five years? 

AI has been around for a while, but we've just begun entering the Generative AI age. GenAI is a flavor of AI that uses loads of data to interact with the world in a more natural way, creating text and images that are hard to distinguish from human works. Today we're mostly dealing with small AI - projects and pilots that are changing the way people access information. But within 5 years, we'll have big, quiet AI doing the work behind the scenes in massive ways. Some of it will be adopted without anyone even noticing. We'll have more code written by GenAI. We'll have more reports written by GenAI. We'll have reticent adopters finally using the tools because companies silently weave new GenAI features inside software products we already love. We'll also have to adapt to zero trust and post-quantum security strategies because basically quantum computing is going to break today's encryptions, so that's going to be a fun little organizational shuffle over the next 5 years.

I dive a bit deeper on the Future of the Internet in this recent Seeking Delphi podcast episode, Artificial Intelligence and the Future of the Internet with Annie Hardy.

 

3. AI safety and governance are hot topics. How is your work with the NIST US AI Safety Institute Consortium shaping the future of responsible AI? 

Interesting question - we’re currently in a position where the focus of the American government is shifting from controlling AI to accelerating it. The shift has enabled us to continue to speak about the things that matter to us, but in context of values like efficiency, security, preservation of capital, and resiliency. I’m proud to be part of a team that aspires to leverage our voices and intellect to advocate for an AI Age that truly is an inclusive future for all. 

 

 4. For women aspiring to leadership roles in tech or young women considering a STEM career, what’s the one skill they should be developing today? 

Confidence. Get help to work through the lies of anyone who told you that you can’t. Get help to work through the steps of being able to understand your super-powers. Join a group to practice your public speaking skills until they’re sharp. Never trust a company with your career path – find your footing, find your confidence, and make your own path.  

 

5. Women supporting women is huge—can you share a moment when another woman at Cisco or in tech lifted you up or inspired you? 

Charlotte Rose published the first report I ever wrote at Cisco. I’d written this massive paper called “The Risk of Exclusion in the Metaverse” and I was struggling to get it published because no one quite understood what I was trying to do. But she has been such an amazing light and support for me, trusting my intellect and capability and putting me in places where amazing things have happened. She’s a wonderful person and has been an immensely valuable ally in my career here at Cisco. Love you, Char. 

 

 6. You’re currently obsessed with finding the perfect white button-down shirt—have you found it yet, and what makes a great one? 

There are so many directions you can go, from full tunic to finding the perfect tailored top with darts to be tucked in. I’m actually playing with the half-tucked look recently and I’m not sure it’s quite “me.” But I like something simple but feminine, and I lean towards unstructured over crisp cotton. I’ve tried wrinkle free, but I feel that many times they lack the attention to detail of one I would have to iron. That said, I do already have three different white/ivory button-down shirts I wear to different occasions, and I have to steam iron all before use. Maybe my glass-slipper-of-a-button-down is asymmetrical, sitting in a boutique in Houston right this very moment. I really don’t know. Clearly, I’m still looking for the perfect one.  

 

 7. What's the most "nerdy" or unexpected thing you absolutely love?  

Personal finance. I am an absolute finance geek and I started a group at Cisco for women to talk about money, called 'Chickonomics'. It’s not my day job, but we now have like 4,400 people in Cisco chatting about money. I love it.  

 

 8. If you could instantly download any skill into your brain (like in The Matrix), what would it be? 

How to survive in a world without technology. I’d love to have highly marketable skills if we ever find ourselves in a post-apocalyptic economy. For a bit more on that, you can read this interview of me by IPSOS, How we can build needed trust in AI through equity. 

 

 9. Since you’re a Futurist, where can all of us non-futurists go to learn more about the Future of the Internet?  

Take a Friday night and watch three movies to experience the caricatures of my alternative futures projections: Minority Report, Wall-E, and Idiocracy.
Otherwise, I recommend following Singularity University's newsletter, MIT Tech Review, The Future Party, Politico's Digital Future Daily, IPSOS' What the Future, and The Non-Obvious Newsletter, plus listening to the a16z and Exponential View podcasts.

 

10. Any other special message?  

Using AI takes lots of energy, and that energy consumption is currently contributing to climate change. Use AI intelligently – learn how to prompt well and get the right responses the first time. Use a smaller model instead of a big one if it suits your needs. We can all do our part to reduce the environmental impact of AI.  

 

Take the opportunity to interact with Annie!

 

Note: Please post your question or comment no later than April 23, 2025.

Post your question/comment below by clicking "Reply" 

(Answers will be processed depending on the availability of the expert)
Don't forget to thank the expert by giving their answers a Helpful vote!

 

 

In case you missed it: Women in Tech group hub on Cisco Community is a dedicated space for women in networking, cybersecurity, AI, IT, and STEM to support each other and help close the gender gap in tech. Connect. Share. Celebrate. Advocate.

12 Replies

Bonnie
Level 1

Is there a way to do an FTD/FMC assessment the way the Cisco CLI Analyzer does with ASAs?

 

 

Great question! I had to dig around a bit for this to confirm my suspicion. CLI Analyzer was helpful in visualizing information captured from ASA, like top talkers, packet tracer, etc. - but all of this is inherently available in FMC. Also, with the new version of FMC you can actually engage with our AI Assistant, so play around with that a bit. 

Vivien Chia
Cisco Employee

LOVE your work @annhardy! I admire your deep expertise in AI and your special message (Q9) on doing our part to use AI intelligently for the environment. Do you have more tips around that? What do you mean by using the 'smaller model'? Can you please expand on that from the user's perspective as we see AI integration everywhere, from the tools we use at work to phone apps... It was fascinating to see that in a religion app too!

Thanks for the question, Vivien. Generative AI models are trained on a certain set of knowledge. Under the hood of ChatGPT is a massive set of data - although they haven't published the exact amount, the most recent model is said to have been trained on over a trillion pieces of information. Consider it like a student who was taught all of the information in the Library of Congress, and who had to read the books multiple times. If AI were a college student, that would require tons of Red Bull. With these models, the "bigger" the model, the more "parameters" it uses and the more data was used to train it. The amount of Red Bull corresponds to the amount of energy required to repeatedly read all of that information until it learns it all. And every time you ask it a question, it has to refer to loads of data - increasing the compute power necessary to process your inquiry (prompt) before it delivers its result (output).  

Now, consider that another student had to read everything in a local library and learn it. Smaller amount of information, less Red Bull. This is a smaller model with fewer parameters that uses less energy. Sometimes, the result might be less accurate depending on the use case - large models are more accurate across the board at multiple tasks and sophisticated use cases. 

But sometimes, like with the 1B parameter Apple Foundation Model running on the new iPhone, the size of the data used for training is *just right* for the size of the application, Apple Intelligence. The Apple Foundation Model doesn't do everything. From that team's Arxiv paper, the model was "designed to perform a wide range of tasks efficiently, accurately, and responsibly," including writing and refining text, prioritizing and summarizing notifications, creating playful images for conversations with family and friends, and taking in-app actions to simplify interactions across apps. It's a bespoke model designed for a specific purpose, which is like having to read and re-read just the Geography section of the library, and reference that when asked a question. Smaller amount of training data, specific application, less Red Bull. 

But some models are small and still perform well - challenging the assumption that you need an entire Library of Congress to get the best Generative AI model. 

I personally believe that the more efficient use of Generative AI models in the future is to have a set of smaller models for specific purposes, and have an agent decide which model to use based on the user's prompt. This is like having a college professor know which student is writing their thesis on Geopolitics, which on European history, and which on the Space Program. Different areas of interest from a prompt would enable the college professor (agent) to determine which student (model) is the best one to tap for the answer (inferencing -> model output). 
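The professor-and-students routing idea above can be sketched in a few lines. Everything here is a hypothetical illustration - the model names and keyword rules are placeholders, and a production router would typically use a classifier or an LLM call rather than keyword matching:

```python
# Hypothetical sketch: an "agent" (the professor) picks a small specialist
# model (a student) based on the topic of the user's prompt, falling back
# to a larger general model only when no specialist fits.

SPECIALIST_MODELS = {
    "geopolitics": "small-model-geopolitics",
    "history": "small-model-european-history",
    "space": "small-model-space-program",
}

KEYWORDS = {
    "geopolitics": ["treaty", "sanctions", "border"],
    "history": ["napoleon", "renaissance", "medieval"],
    "space": ["apollo", "orbit", "rocket"],
}

def route_prompt(prompt: str) -> str:
    """Return the name of the smallest model suited to the prompt's topic."""
    text = prompt.lower()
    for topic, words in KEYWORDS.items():
        if any(word in text for word in words):
            return SPECIALIST_MODELS[topic]
    # No specialist matched: use the big model (more capable, more energy).
    return "general-large-model"

print(route_prompt("Who designed the Apollo lunar module?"))
# routes to the space-program specialist
```

Each routed query runs inference on a much smaller model, which is the energy win described above.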

The most sustainable future is one where we find ways to intelligently choose a Generative AI model with the lowest carbon footprint - like we might do with a refrigerator. If our fridge was also a dishwasher and oven, it would use a lot of energy. Instead, we should have multiple smaller models, as we have in our kitchen.

Hope that helps - thanks for asking! 

Thanks for the response @annhardy - the Red Bull analogy is super helpful!

Hi @annhardy 

thank you for your time and for sharing your experience with us !!!

Three questions:

1.

Do you think the cost of AI and Quantum Computing will be affordable for any company in the next 5 years ?

2.

AI in business has not reached its full potential, and AI companies are still developing solutions that meet the specific needs of each business. As a result, it is still a challenge for some companies to find valid AI use cases that they can benefit from. What advice do you have for companies that are not yet AI-mature?

3.

Quoting Alvin Toffler: "The Right Question is usually more important than the Right Answer to the Wrong Question.", in the next few years, it is possible that we (humans) will not be able to "prompt intelligently" (too many "Wrong Questions"). Do you think AI can evolve to the point where it can understand our "human language" and respond usefully the first time or "prompt intelligently" will be a valuable skill in the future ?

 

Great questions, Marcelo!

1. I believe that Quantum computing power will be available to companies in the next 5 years, but not in the form factor you might imagine. I believe that the Quantum Chips we have today are going to be able to power Quantum Processing Units that we can add to the network architecture for specific use cases. But currently, you can already tap into Quantum computing power through Quantum clouds that exist within hyperscalers and companies like D-Wave. "Affordable" is a tough term to wrangle because it's so subjective - but I believe that beyond cloud, companies will be able to purchase modest QPUs within 5 years. 

2. This is where expertise is really critical - but expertise is quite hard to find and very expensive. I think that companies who want to adopt AI need to build cross-organizational tiger teams internally who represent the business interests and who are excited about AI. They develop a perspective on the opportunity, do the research, and put together ideas. Every company has people inside it who have good ideas. For companies to succeed with AI, they have to understand how it can accelerate business outcomes, and that starts with business needs. That internal center of excellence (COE) can then pull in agencies that help them mature those ideas and build POCs - but it starts with finding the excited, capable AI newbies *internally* who can drive ideas. This isn't necessarily what most organizations are talking about or doing - I think lots of companies are just trusting consultancies to do it. But they're missing the "people" part of "people - process - technology" - start with the internal ideas, upskill your internal teams, and think of agencies as partners, not the solution. 

3. I actually already advise colleagues, customers, and partners alike to prompt more intelligently, and to ask AI to help them develop prompts. Prompt Engineering is one of the key skills that will be required of every role in the future and, incidentally, will reduce the energy requirements for model inferencing. It's a critical component in making AI real and driving higher ROI. But yes - just like autocorrect automatically corrects our misspelled words, autofill accelerates the emails we write, and Google corrects our searches based on historic trends, so will AI critically press into whether we're going about our prompting in a way that drives the outcome we seek. This is pretty easy to do today behind the curtains when building an assistant - you can augment a prompt with a specific request to help the user hone the prompt to get the best result, making recommendations or asking leading questions. So yes, it will be able to help - but it won't be able to read our minds to figure out the exact outcome we seek (for a while yet). 
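The "behind the curtains" augmentation described above can be sketched simply. This is a hypothetical illustration, not any vendor's API - the template wording and the function name are assumptions; an assistant builder would send the wrapped string to whatever model they use:

```python
# Hypothetical sketch: wrap the user's raw prompt in an instruction that
# asks the model to help refine vague requests before answering them.

REFINE_TEMPLATE = (
    "Before answering, check whether the request below is specific enough "
    "to answer well in one pass. If it is vague, suggest one clarifying "
    "question or a rewritten prompt instead of answering. Otherwise, "
    "answer directly.\n\n"
    "User request: {prompt}"
)

def augment_prompt(user_prompt: str) -> str:
    """Return the augmented prompt an assistant would send to its model."""
    return REFINE_TEMPLATE.format(prompt=user_prompt)

print(augment_prompt("make my network faster"))
```

A vague request like this one would come back with a clarifying question rather than a guess, which is the first-pass efficiency the answer above describes.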

Thanks for asking! 

Gus L. D.
Level 1

I love this!!  All questions by @Vivien Chia  were wisely answered by @annhardy    Thank you both!!    

Congratulations Ann for being a true inspiration!!   Keep up with your personal and professional goals.  You can do it!!   

Also, @Brooke Hammer. Thank you for sharing this post with all of us. 

Cheers,

Gus

jucanepa
Cisco Employee

Love your work Annie Hardy!  Thank you.

Thank you for the amazing links to resources! I think it will take all of my free time for a while...

What are your thoughts on when Artificial General Intelligence will finally break through? And, as a futurist, what do you see for our society when that finally does happen?

Maren

Maren - so as far as breakthroughs, there are three massive ones: (1) Artificial General Intelligence, (2) Superintelligence...and (3) The Singularity. 

(1) Artificial General Intelligence (AGI) is a theoretical state where AI performs as well as a human - but before we get there, AI has to outperform humans in specific tasks, a state known as Narrow AI or Weak AI. In fact, we are currently seeing AI perform as well as humans in a certain set of tasks. For instance, in the news this week, OpenAI's GPT-4.5 model became the first model to successfully pass an original configuration of the Turing test, convincing people 73% of the time that it was human. AI has achieved Narrow AI mastery in certain areas, but it will need to become far more capable across areas to achieve AGI, performing as a human would in all the places a human would perform. Some AI SMEs have said they expect us to reach AGI by 2030, but in another study, 50% of AI researchers believe we'll have AGI between 2040 and 2061. I think we will have early AGI within a decade. 

(2) Superintelligence is when AI performs all tasks at or better than a human. This is when people start to get alarmed. I don't have a timeline on this one but it's largely assumed that it will be just decades from when AGI is established. 

(3) The third is the Singularity. This is when AI not only outperforms us, but it begins to self-improve and continue to grow. I don't have a timeline here - again, it builds off of the previous phases so it could be <100 years. 

Getting to this point requires a strong ability to control AGI first - which we don't currently have. If you're interested in - or concerned about - AGI, Superintelligence, or the Technological Singularity, you should also be paying attention to AI safety, control, and guardrails. Related: check out what Google DeepMind's CEO has to say about the need for more global oversight of AGI development.

Kathy N.
VIP

Thank you for your post and for the great answers @annhardy   I appreciate how you related it to traditional tools and resources to help me get a better overall understanding. 

My question for you is much simpler than most because I assist quite a few people that are "scared" of AI because they only visualize it in terms similar to WarGames or Terminator.  What suggestions do you have on how to respond to questions from non-tech people about what it is as well as reassure those who only think of it in terms of the harm it can do?  I feel like I'm constantly explaining that it's not going to take over the world. 

Is it?     


