AI: More Hype Than Help (For Now.)

March 28, 2024

AI: it’s likely coming to your industry, for better or for worse. A recent study by Equinix revealed that 85% of IT leader respondents are already using or planning to leverage AI. However, 40% doubt that their infrastructure can handle it. While many companies are feeling the pressure to adopt AI, a question is brewing, as it does with any new technology: how much can it help, and how much is just hype?

When adopting AI technology, IT leaders and stakeholders must keep in mind that their existing technology infrastructure underpins everything. Any additional technology will need to be integrated into that infrastructure, and use cases are best pinpointed before investments are signed, sealed, and delivered.

How well do you understand AI?

A study by The Verge polled 2,000 people and found that 1 in 3 had used at least one of a group of popular AI tools, but “most aren’t familiar with the companies and startups that make them.” In the corporate world, companies are adopting AI across a variety of use cases, from CRM tools and customer service chatbots to cybersecurity and even general operations. But, per the aforementioned Equinix study, 40% of IT leaders say they are not comfortable with their IT team’s ability to accommodate the growing use of AI.

It seems that, as with any new and shiny technology, people want to get their hands on it– but most don’t really understand how to apply it to their business. 

We’re here to help you gain that crucial understanding. Let’s explore some of the potential drawbacks and misconceptions of AI– so that you can make the best decision for your company going forward.  

AI is causing a scramble for power – but that may be temporary.  

In the late ‘90s and early aughts, the dot-com bubble gripped the global economy. The anticipated power of the internet was reflected in investments, and stock valuations shot sky-high. Some great companies were born in those nascent stages of the web– but others proved to be sunk costs for their investors. That’s because when new and exciting technology emerges, the hype cycle colors our perception. People forget that they can’t predict the future.

At that time, it was thought that the internet would need immense amounts of power to grow to its predicted scale and sustain worldwide adoption. Meeting that demand looked not only unsustainable but expensive, and this put growing companies in a bind. As so often happens in times of struggle and fear, innovation was born: this growing concern led to the creation of hyperscale data centers that cushioned the blow.

In reality, and due to those combined efforts, electricity use rose only minimally as internet use increased massively worldwide. “That’s an example of when the industry focuses on a problem, they go after it and figure out how to solve it,” said Dr. Jonathan Koomey, an expert on the energy and environmental effects of information technology, in an interview on The UPSTACK Podcast.  He predicts the same type of innovation is around the corner for AI.  

As it stands, Morgan Stanley projects that power demand from generative AI will increase at an annual average of 70% through 2027. “[A] large portion of the incremental power needs for AI [is] being sourced from zero or low-carbon technologies,” says Stephen Byrd, Morgan Stanley’s Global Head of Sustainability and Clean Tech Research, in the same report.
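To put that projection in perspective, a 70% average annual increase compounds quickly. Here is a quick back-of-the-envelope calculation; the assumption of three full years of compounding (2024 through 2027) is ours, since the report's exact baseline year isn't stated here:

```python
# What "70% average annual growth" compounds to over three years.
# Assumption (ours, not the report's): three full years of growth.
growth_rate = 1.70     # 70% annual increase expressed as a multiplier
years = 3

multiple = growth_rate ** years
print(round(multiple, 1))  # prints 4.9 — nearly a fivefold rise in demand
```

Even if the baseline year shifts by one, the takeaway holds: demand that compounds at this rate multiplies, rather than merely grows.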

But, as it happens, AI may currently put a strain on utilities– whether the power comes from green sources or not. And how much strain, we can’t be certain. “Estimates do exist, but experts say those figures are partial and contingent,” says The Verge, “offering only a glimpse of AI’s total energy usage. This is because machine learning models are incredibly variable, able to be configured in ways that dramatically alter their power consumption. […] The question of whether efficiency gains will offset rising demand and usage is impossible to answer.”

Whatever the true total, we can safely assume it’s no small number. For example, NVIDIA alone is estimated to ship 1.5 million AI server units annually by 2027. “These 1.5 million servers, running at full capacity, would consume at least 85.4 terawatt-hours of electricity annually—more than what many small countries use in a year, according to the new assessment,” says Scientific American.
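The Scientific American figure can be sanity-checked with simple arithmetic. The per-server power draw below is our own back-of-the-envelope inference, not a number from the assessment:

```python
# Rough sanity check of the 85.4 TWh figure, assuming (as the assessment
# does) that all servers run at full capacity year-round.
servers = 1.5e6          # projected annual NVIDIA AI server shipments by 2027
annual_twh = 85.4        # projected annual consumption, terawatt-hours
hours_per_year = 8760

# Implied average draw per server: convert TWh to kWh, divide by server-hours
kw_per_server = (annual_twh * 1e9) / (servers * hours_per_year)
print(f"{kw_per_server:.1f} kW per server")  # prints "6.5 kW per server"
```

An implied draw of roughly 6.5 kW per server is plausible for a multi-GPU AI system, which suggests the headline figure is internally consistent rather than inflated.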

Though innovation may be on the horizon, companies seeking to hit vital sustainability goals will want to be aware of AI’s potential impact, and may consider taking part in the new generation of innovative technologies being dreamed up by AI advocates.

Eventually, widespread adoption may become a key driver of revenue and growth for sustainable energy stakeholders. But for now, it’s a tenuous and murky situation.

UPSTACK recommends: Don’t pull the plug on AI altogether– but be vigilant about the energy cost and plan ahead to stay on track with sustainability goals.

AI isn’t thinking – or learning.

It’s not just using AI tools that drains power– training AI, as it turns out, takes a lot of energy. AI tools, and all other machine learning implementations, are “taught” through the process of training, wherein information is fed into an algorithm to produce a desired output. When the output is poor or problematic, the algorithm is tweaked to correct and improve results.
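The feed-score-tweak loop described above can be sketched in a few lines. This toy example, which fits a single number with gradient descent, is purely illustrative; production models run the same kind of loop across billions of parameters, which is where the training time and energy go:

```python
# Toy illustration of the training loop: feed data in, compare the output
# to the desired output, tweak the model to improve next time.
# All numbers are invented for illustration.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs paired with desired outputs

weight = 0.0          # the model starts out knowing nothing
learning_rate = 0.05

for epoch in range(200):                      # repeat many times
    for x, target in data:
        output = weight * x                   # produce an output
        error = output - target               # score it against the desired output
        weight -= learning_rate * error * x   # tweak the model to reduce the error

print(round(weight, 2))  # prints 2.0 — the model has "learned" to double its input
```

Even this trivial case needs hundreds of passes over the data; scaling the same procedure to modern model sizes is what turns training into an hours-to-months, energy-intensive job.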

This process takes time, and larger or more complex tools mean longer training times– from hours for the simplest to weeks, months, or more for large-scale tools. Despite all of the time spent training these models, they’re not perfect– not even close. If you’ve ever seen an AI tool’s rendering of human hands, or its answers to mathematical problems, you’ll know exactly what we mean.

Here’s the key to understanding why these tools fail in certain arenas: they’re not really learning.  

An AI model’s success is predicated on its ability to produce the results users predict or expect from a given query. For example, if you ask AI to show you a human hand, you expect it to have five fingers. In a well-trained AI model, you may get that– or six, or seven fingers. In a new or undertrained AI model, you may get a puppy’s paw.

Because AI is trained to regurgitate or synthesize the information it’s fed during training, you can’t trust its accuracy. While this can be funny when the stakes are low, such as playing with an image creator for fun, problems arise when people lean too hard on AI for facts and research.  

For example, tools are known to fabricate references.  

“We’ve seen examples where somebody asks [an AI tool] a question and it creates articles that don’t exist, co-authors who never worked together,” Dr. Koomey shared with The UPSTACK Podcast. “They call this hallucination. Which I think may be giving it a little too much credit. It’s not actually thinking, it’s saying, ‘What is the most likely next word?’” 
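Dr. Koomey’s point, that the model is simply picking the most likely next word, can be sketched in miniature. The candidate words and probabilities below are invented for illustration; real models compute them from their training data:

```python
# Minimal sketch of "what is the most likely next word?", the core move
# behind generative text. These probabilities are made up for illustration.
next_word_probs = {
    "study": 0.40,
    "paper": 0.30,
    "citation": 0.25,
    "unicorn": 0.05,   # unlikely words still hold some probability mass
}

# The model doesn't check whether anything is true; it just picks a likely word.
most_likely = max(next_word_probs, key=next_word_probs.get)
print(most_likely)  # prints "study"
```

Because the selection is driven by likelihood rather than truth, a plausible-sounding but nonexistent article title can win out over an accurate answer– which is exactly the “hallucination” behavior described above.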

UPSTACK recommends: Don’t take facts from AI at face value. Use these tools as aggregators of information or ideas– but be sure to verify their answers and citations.

AI can transform your workplace– for better or for worse.

In cases like those shared by Dr. Koomey, AI may do more harm than good. But there are instances in which it has empowered greater productivity and innovation for enterprises.  

For example, AI has helped make programmers up to twice as efficient in creating code, and 83% of customer service workers say that AI will help them help more customers.  

“There are these examples where these tools have had a dramatic effect, [but the] question is whether you can apply these tools across the board, and I think there is a lot of hype here,” Dr. Koomey told Alex and Greg, The UPSTACK Podcast hosts. “There’s a lot of people jumping on board the bandwagon trying to apply these tools for different things. And that’s probably good in the beginning– you want people to experiment and find where the new tech can be useful– but there are going to be places where it’s not going to help, or it’s even going to be a problem.”

AI did present some issues for CNET, which recently doubled down on its use of AI after employing the technology to write nearly 80 articles. Despite issuing a multitude of corrections and facing public criticism, the company says it will continue to use the tool.

Even more poignantly, NVIDIA has created an AI tool that would allow healthcare companies to replace nurses with AI models, which cost just 1/10th of a real nurse’s salary to operate. Over 40 U.S.-based providers are reportedly testing the technology. It raises the question: are we far enough along in our experience with AI to trust its abilities in possible life-or-death situations?

Another ethical issue arises in situations where AI does improve productivity: who gets paid for it?  

A Readwrite thinkpiece gives some insight into the issue: “Workers have reported an increase in the intensity of their work since the introduction of AI, despite the fact that it may increase productivity. Wages for workers who aren’t managers or AI specialists haven’t changed much either, suggesting that while productivity may increase, pay hasn’t.” 

“Many proponents of AI believe the problem is not AI itself, but the way it’s being consumed,” TechTarget states. “Proponents are hopeful that regulatory measures can address many of the risks associated with AI.”  

UPSTACK recommends: AI isn’t a guaranteed booster of efficiency or revenue. Carefully consider your use case, and the ethical implications that may arise, before believing in the promise of productivity. 

Legislation could restrict AI use.

Given the above reasons, it’s no surprise that people are beginning to raise concerns about the lack of regulation surrounding AI tools, especially in the healthcare setting. The noise is growing louder, and legislators are taking note.  

The EU just approved a landmark Artificial Intelligence Act, “the first comprehensive law on AI by a major regulator anywhere.” The act divides AI into three risk categories and subjects tools to tiers of regulation based on their categorization. And it has all the trappings of a new global standard.

It’s important to be aware that you may invest in tech that becomes heavily regulated or even banned if this trend of legislation takes root in the U.S. For companies with global offices, especially in the EU, the act could fracture continuity in their global operations.

UPSTACK recommends: Stay in the loop about laws and regulations globally and in your area before investing in AI. 

UPSTACK is about help, not hype.

Although AI may be overhyped, there are certainly use cases in which it can enrich your company’s overall productivity and efficiency. As experts in emerging technologies like AI, UPSTACK can help you seamlessly integrate the right AI solutions for your business, or help find alternative solutions to bolster productivity and innovation.  

Whatever you’re looking for, our experts are happy to guide you. Get in touch here to chat about your tech stack.  

To learn more about topics discussed in this article, listen to The UPSTACK Podcast: Powering AI Solutions featuring Dr. Jonathan Koomey, Koomey Analytics.