Everyone is talking about the positive impacts A.I. can have on your business.

A.I. is revolutionizing businesses, but what are the real risks and rewards of integrating it into your operations? At Be Co, we regularly use A.I. to enhance our own productivity. When I introduced our marketing person to ChatGPT and Midjourney, those tools helped an otherwise non-technical writer produce copy about technical topics and create images to support it. I regularly use A.I. to troubleshoot problems in areas of system administration I’m not familiar with; it acts as an assistant, helping me find solutions faster by serving as a sounding board. Heck, I used A.I. to proofread this post! The possibilities of generative A.I. seem limitless, and that’s exactly what scares me.

What are the possible negative repercussions of A.I. for you and your business?

There have been countless sales pitches and presentations on how A.I. will benefit productivity, but my deeper question is: whose productivity benefits here? Yours? Or that of the software company whose product has a half-baked implementation of A.I.? This is the core question I’m asking as companies shove generative A.I. into every product possible to attract investors and generate marketing buzz. Is A.I. a boon or a bust for you, the user?

Let’s look at the facts:

What happens to your data once it leaves your computer?

Generally, any input you provide to generative A.I. is also used to train the A.I. model. This means that inputting sensitive information could violate an NDA or, worse, the law. Ask yourself: does your company work with proprietary, sensitive, or regulated data? Might one of your employees divulge trade secrets when interacting with generative A.I.?
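One practical guardrail is to scrub obviously sensitive strings from a prompt before it ever leaves the machine. Below is a minimal sketch in Python; the patterns and the redact helper are hypothetical illustrations, and real data-loss-prevention tooling goes much further:

```python
import re

# Illustrative patterns only; a real DLP (data loss prevention) tool is far
# more thorough. The pattern names and the redact() helper are hypothetical.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern before it leaves the machine."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Summarize this: John's SSN is 123-45-6789, email john@example.com."
print(redact(prompt))
# Summarize this: John's SSN is [REDACTED SSN], email [REDACTED EMAIL].
```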

You are giving your consent for Adobe to use your data for training when you use their products 🤷‍♂️

In a recent controversial example, Adobe introduced an “AI Assistant” in its Acrobat app. A button at the top right of every Acrobat window invites members of your organization to send a PDF up to the cloud for analysis and subsequent summarization.

Adobe has stated: “The features are governed by data security, privacy and AI ethics protocols and no customer content is used to train LLMs without customers’ consent.” But this is tricky: can you be sure you never agreed to let Adobe train on your data? Was that consent already buried in the terms and conditions you accepted when you started using Acrobat? Could Adobe have made a mistake in its implementation or its data security? Is the data being sent to servers overseas, in a country with laws different from our own? There are many unanswered questions in this highly opaque scenario.

After Adobe reassured customers it wouldn’t abuse their data, it later changed its terms and conditions, using very broad language to state that it could use that data. I’ve personally caught Photoshop uploading data to google.com when I open images in the application. Why does Google need anything from me when I open an image on my own laptop?

To put it plainly, quoting a Reddit user on the SysAdmin subreddit: “We are [a] HIPAA covered entity. A user inadvertently doing anything that sends documents to a provider not with a BAA, that is a huge no-no.”

There are many instances where this kind of data transmission can be against the law: a financial organization required to comply with GLBA, for example, or a government contractor who deals with anything deemed classified.

Why is Adobe Photoshop trying to upload data to Google when I open a new document? 🤔
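If you want to check this sort of behavior yourself, an outbound firewall like Little Snitch will show it in real time, but you can also get a rough read with a few lines of Python. This is a minimal sketch using the third-party psutil library; the process-name filter is an assumption for illustration, and the output is raw IP addresses you would still need to resolve back to hostnames:

```python
import psutil  # third-party: pip install psutil

# Rough spot-check of which remote hosts a running application is talking to.
# The "photoshop" name filter is an illustrative assumption; adjust it for
# your system. On macOS/Linux you may need elevated privileges to inspect
# another process's sockets.
for proc in psutil.process_iter(["name"]):
    if "photoshop" in (proc.info["name"] or "").lower():
        try:
            # On psutil versions before 6.0 this method is called connections()
            for conn in proc.net_connections(kind="inet"):
                if conn.raddr:  # only connections with a remote endpoint
                    print(f"{proc.info['name']} -> {conn.raddr.ip}:{conn.raddr.port}")
        except psutil.AccessDenied:
            print(f"Need elevated privileges to inspect {proc.info['name']}")
```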

It might be acceptable if you had to turn this feature on in order to use it, but Adobe’s A.I. Assistant comes enabled by default, meaning that unless you’re a nerd who obsessively reads the trade press like me, this could already be affecting you and your company. Additionally, turning it off was a nightmare for the organizations I manage. The first time I did it, it took 90 minutes of my time, and Adobe’s support even hung up on me partway through the process.
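For Windows fleets there is at least a scriptable way to switch the feature off. Here is a minimal sketch assuming the bEnableGentech policy value that Adobe’s enterprise documentation describes for locking down Acrobat’s generative A.I. features; verify the key against Adobe’s current documentation before deploying, since these settings do change, and note that macOS uses a plist-based equivalent:

```python
import winreg  # Windows-only, standard library

# Locks Acrobat's generative A.I. features off machine-wide via the
# FeatureLockDown policy key. "bEnableGentech" is the value name given in
# Adobe's enterprise admin guide at the time of writing; confirm it against
# current Adobe documentation, and run this as Administrator.
KEY_PATH = r"SOFTWARE\Policies\Adobe\Adobe Acrobat\DC\FeatureLockDown"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "bEnableGentech", 0, winreg.REG_DWORD, 0)

print("Acrobat generative A.I. features locked off for this machine.")
```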

Hallucinations. 

Hallucinations are errors presented as facts by A.I., often undetectable by the A.I. itself. In 2023, a New York lawyer cited non-existent case law generated by an A.I. tool. According to the BBC, “A New York lawyer is facing a court hearing of his own after his firm used [the] AI tool ChatGPT for legal research. A judge said the court was faced with an ‘unprecedented circumstance’ after a filing was found to reference example legal cases that did not exist.”

Tech bros whenever you question their morning routine of microdosing and insisting that AI will solve all the world’s problems.

According to Zach Warren of the Thomson Reuters Institute, “This is one of the biggest concerns surrounding the technology currently. Hallucinations are basically errors that pop up in the output of generative AI, presented as fact, and which it can’t recognize as wrong.” ChatGPT may be the best compulsive liar that ever “lived.” I say this because if you’ve ever chatted with A.I., it speaks with authority and confidence even when it is confidently wrong about the topic at hand, whereas a human might express doubt about a particular claim or fact.

There is a legitimate reason hallucinations exist within A.I. According to Sam Altman, one of the co-founders of OpenAI, “If these models didn’t hallucinate at all, ever, they wouldn’t be so exciting. They wouldn’t do a lot of the things that they can do. But you only want them to do that when you want them to do that.” Essentially, hallucination is an A.I. model’s ability to imagine something new, like a poem or a fictional story. Imagination is a desirable attribute in an LLM. We just have to understand the risks hallucinations pose to ourselves and our businesses, so that we don’t end up in a New York court hearing with reputational damage of our own.

Loss of privacy and contemplative autonomy.

Despite a common misconception, A.I. and machine learning were around long before ChatGPT’s public release in 2022. For years, A.I. has been used to curate the content feeds in your social media apps and to decide what you might like to watch on Netflix. In these situations, A.I. is used to figure out what you will like most, in order to keep you hooked on whatever platform you’re on.

A.I. functions best when it knows more about you. The more it knows, the better it can determine your preferences and introduce you to topics or products you might like. That, in turn, makes more money for the company deploying the A.I., whether directly or by keeping you hooked on its content, and thus on its advertisements. It can become a dangerous feedback loop that sends us down the rabbit hole.
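To make that loop concrete, here is a toy simulation in Python. Every number in it is a made-up assumption (the “intensity” scale, the watch-time model, the drift rate), and no real platform is this simple, but it shows how greedy engagement optimization can ratchet a user’s taste upward step by step:

```python
import random

# Toy model of an engagement-maximizing feedback loop. Purely illustrative:
# each piece of content has an "intensity" from 0 to 1, the recommender
# greedily picks whatever it predicts will hold the user longest, and the
# user's taste drifts toward whatever they just watched.
random.seed(42)
user_taste = 0.2  # starts mild

def predicted_watch_time(item_intensity: float, taste: float) -> float:
    # Assumption: content slightly more intense than the user's current taste
    # holds attention best; this is what pulls the loop upward over time.
    return 1.0 - abs(item_intensity - (taste + 0.1))

for step in range(10):
    catalog = [random.random() for _ in range(50)]
    pick = max(catalog, key=lambda item: predicted_watch_time(item, user_taste))
    user_taste = 0.8 * user_taste + 0.2 * pick  # taste drifts toward what was watched
    print(f"step {step}: recommended intensity {pick:.2f}, taste now {user_taste:.2f}")
```

Run it and the recommended intensity climbs steadily even though the user never asked for more extreme content; the recommender and the drifting taste pull each other along.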

Guillaume Chaslot, who holds a PhD in artificial intelligence and was hired by YouTube in 2011, helped create an algorithm aimed at increasing watch time, boosting time spent on the platform by 50% in just one year. “The idea was to maximize watch time at all cost. To just make it grow as big as possible,” Chaslot recalls of his time working at Google.

This had various undesirable effects. YouTube’s “Up Next” feature, which utilized this algorithm, would lead viewers to increasingly extreme content the more they watched, and some individuals were radicalized through suggested content. As New York Times journalist Zeynep Tufekci wrote, “It seems as if you are never ‘hard core’ enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes.”

As A.I. recommendation algorithms get better at understanding what keeps us hooked, they also continue to learn about our intent. Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, states, “I don’t think a single person in the A.I. labs I’ve ever talked to thinks prompt crafting for most people is going to be a vital skill, because the A.I. will pick up on the intent of what you want much better.” As with any technology, this can be either great or disastrous. If A.I. is used benevolently, its ability to pick up intent with increasing accuracy will be helpful. If it is used with malicious intent, we could find ourselves fed information that influences our decisions in ways that run against our own, or our company’s, self-interest.

In Europe, the EU has already passed regulation addressing these issues: the EU Artificial Intelligence Act. The Act prohibits certain A.I. systems, including, but not limited to, systems that:

  • “[Deploy] subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.”

  • “[Exploit] vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.”

  • “[Perform] social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.”

Some of the scenarios above remind me of the television series “Black Mirror.” I thought Black Mirror was fiction?? Unfortunately, in the US, we don’t have similar laws protecting users and companies from these abuses. It’s the Wild West for A.I., and we’re left with black-box systems that forgo transparency and accountability.

Approach with caution.

I want to reiterate that A.I. already has a tremendously positive impact on society, but we need to stay skeptical about that impact. As with the nuclear age, humanity received both clean nuclear energy and the nuclear bomb. The outcome of a technology depends on the intentions of the entity hosting the A.I.

Approach the use of generative A.I. with skepticism and critical thinking, just as you would business negotiations or interactions with strangers. When you’re talking to A.I., you’re feeding into a digital collective consciousness, powered by massive supercomputers that consume as much power as a small country. It’s not something to take lightly.

A simple click in an app to send your data to A.I. might seem like a small action, but just like everything else on the internet, it’s all connected.

Takeaways.

  • A.I. can be a powerful productivity tool when used correctly.

    • Understanding what information is appropriate to submit to company-approved A.I. will help reduce risk.

  • A.I. comes turned on by default in many products that don’t have a good track record for privacy or security.

    • Know when and where to turn off A.I. features to prevent your users from accidentally uploading sensitive or regulated data to the cloud.

  • Beware of A.I. hallucinations.

    • Generative A.I. has an imagination, but it cannot tell when it is imagining things and when it is rendering knowledge based in fact. It is up to the user to remember that A.I. cannot be trusted to be correct.

  • A.I. can be used for malicious purposes, just like social media has been in the past.

    • A.I. companies have shareholders, and shareholders want increased profits. Oftentimes those profits take precedence over safety. We need to stay aware and not let our guard down when working with A.I. tools.

A.I. is a game-changing tool. Let’s act like it.

I am the last person who will tell you to stop using A.I., but I do encourage readers to step back and reexamine their usage. Who does A.I. benefit: you, or the company providing it? Sometimes A.I. adds extra steps to a task, because you have to babysit its often terrible output. Other times, it can really deliver a piece of content or an idea that helps expedite your process! Understanding the risks and rewards of A.I. will help prepare you for work in the coming decade.

Contact us to learn more about training your team to reduce the risks of A.I.

We can help prepare your team to be mindful of the risks of using A.I. and set them up for success. You can book an appointment to talk about this topic by clicking here.

Randall Bellows III

Founder of Be Co - Technology Consultant, vCIO, Creative

https://beco.technology