AI: Well, that didn't take long...

Good day my security super soldiers...while I am finishing off part 2 of 4 of the massive Okta issues, my squirrel chasing brain tripped over this little article:
Chinese scientists’ attack on ChatGPT shows how criminals can use weak spots to exploit AI
Well of course AI will be poisoned...as was blockchain, as was yahoo (and then google), as was the internet itself!
While reading this I was reminded of a story from the young internet days. Are you ready? Here we go! STORY TIME!
A long, long time ago, in a galaxy...well, exactly like this one, a young internet was born. I was lucky enough to have had access before 1991. Before 1991 the Internet was a glorious place. People freely shared information, people took care with what they posted, and all in all, you could trust the data you read...people's reputations were at stake. Articles were peer reviewed and it was a great time for information sharing...
Around 1991 the old ARPANET was opened up to the public. I was a young college student and I honestly thought this would be the beginning of a new enlightened age for the world! My sweet summer child mind dreamt of how the internet would usher in a new dawn for humanity. We would accelerate our research using this new tool. We would take down cancer with the new power of the internet. We would evolve as a society and the world would become more harmonious!
Oh man, was I wrong. It took about 3-4 years, but by the time the first World Wide Web Consortium (you youngins might know it as the W3C) days and the write-up of HTTP/1.0 (thank you, Tim Berners-Lee!), there were signs that my internet-fueled utopia was looking more like the second level of hell, quickly closing in on the third...
Long story short, seeing my perfect internet being abused by commercial forces was not the worst part...the worst part was the dumbing down of information. The worst part was that I went from being able to trust everything posted on the internet to having to verify everything! Exposing the internet to the public was my first lesson in how humans work...some take pride in the veracity of their data (Doc Holligray at your service!)...and some just want to watch the world burn.
In the early search days, there were search engines called Yahoo, Altavista, and Hotbot (extra points if you can remind me of the others! Send mail to: DuhYouForgot@prozesstec.com)...and all of them could be manipulated to show certain pages before other pages...we call this manipulation SEO...Search Engine Optimization. I call it poisoning. The way the system was designed, it should show results based on keywords your site naturally uses so that people can find the data they need quickly...but these search engines were poisoned, and searches were now at the mercy of early SEO/poison gurus...
Which circles back to our blog for today...China-based researchers have PoC'd (Proof of Concept) the poisoning of popular AI tools. When you use the tools, just like when you use search, how you interact with the tool will slowly change the tool...it's a learning mechanism meant to keep the requests relevant.
AI is smarter than directory searches, but it can still be fooled...the high-level overview of how this was accomplished is that the model was fed slightly modified (adversarially perturbed) images until it misidentified a picture of a woman as a picture of a panda.
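To give a feel for how a "slightly modified image" can flip a classifier, here is a toy sketch in the spirit of the classic fast-gradient-sign trick. Everything in it is hypothetical: a linear stand-in for a real image model, random numbers in place of pixels, and it is NOT the researchers' actual method...just the general idea that a tiny, bounded nudge per pixel can change the answer.

```python
import numpy as np

# Hypothetical "model": class = sign(w . x). +1 plays the role of "woman",
# -1 plays the role of "panda". w stands in for trained weights.
rng = np.random.default_rng(0)
w = rng.normal(size=100)                  # the model's weights
x = w + rng.normal(scale=0.1, size=100)   # an input the model labels +1

def classify(x, w):
    return 1 if w @ x > 0 else -1

# For a linear model, the gradient of the score w . x with respect to the
# input is just w, so stepping every "pixel" by -eps * sign(w) lowers the
# score as fast as possible for a fixed per-pixel budget eps.
eps = 1.1 * (w @ x) / np.abs(w).sum()     # smallest budget that flips the label
x_adv = x - eps * np.sign(w)

print(classify(x, w), classify(x_adv, w))        # 1 -1  (label flipped)
print(float(np.max(np.abs(x_adv - x))) <= eps)   # True  (each pixel barely moved)
```

The unsettling part is the last line: no single pixel changed by more than a small budget, yet the classification flipped entirely. Real attacks do the same thing against deep networks, just with more math to estimate the gradient.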
While this can be seen as an act by kids done for the lulz, it highlights a big issue with AI tools...it's the same issue search has...the same issue blockchain has...the same issue every single internet-based tool has...
...and that common issue is humans. We are the poison! So how do we deal with the inevitable? How do we protect ourselves from the onslaught of bad info? How do we make sense of the data?
The biggest tool we have is our own ability to master our knowledge. Double check everything! That's the biggest step...when you read something new, make sure it's validated someplace else. This is a version of peer review...and it's powerful. The more people who agree on the peer review, the more faith you can have that the data is correct. If you and your friends think one thing, but a million other people think another...maybe you and your friends are wrong!
If one chat is telling you that the Himalayas are made of cheese, verify it with the other chats...please. Please verify. No matter how much you want to believe that the Himalayas are made of cheese, please verify this info...so how do we verify?
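The "ask the other chats" habit boils down to a majority vote over independent sources. A trivial sketch (the sources and their answers here are made up for illustration):

```python
from collections import Counter

def verify(question, sources):
    """Ask several independent sources the same question and trust the
    majority answer. Each source is a callable returning its answer."""
    votes = Counter(src(question) for src in sources)
    answer, count = votes.most_common(1)[0]
    confident = count > len(sources) / 2   # strict majority agrees
    return answer, confident

# Hypothetical "chats" answering the cheese question (one is poisoned):
chat_a = lambda q: "rock"
chat_b = lambda q: "rock"
chat_c = lambda q: "cheese"

print(verify("What are the Himalayas made of?", [chat_a, chat_b, chat_c]))
# ('rock', True)
```

The catch, of course, is the word "independent"...if all your chats drank from the same poisoned well, a unanimous vote proves nothing. That is exactly why the poisoning research should worry us.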
Well, Ole Doc Holligray has a surefire way to test out any new paradigm...we test against it. I test to DISPROVE my thought. I try to disprove the new bit of information, and the more it survives the constant onslaught of attacks trying to disprove it, the more "true" it is.
Lastly, be open to changing your mind. I once thought the internet would be the dawn of an age of enlightenment...and while it still might be one day, I had not anticipated the journey through the nine rings of hell to get there.
Follow the data...all the data. If your truth is predicated on ignoring a large dataset, then maybe you should review and understand that large dataset before making up your mind.
The big fear for me here is not that researchers in China just PoC'd the poisoning of AI...the big fear for me is that we as a civilization will fall into the same pitfalls we have fallen into before. We put too much faith in what has been digested by others and we don't think for ourselves. We believe AI when we should verify what AI tells us...
If you want to discuss AI and how to architect it safely, please feel free to reach out (Contact Us)!
That’s all for today...Good luck my security super soldiers, and may all your downtimes be planned!