Unbound: a case for AI
I am writing this as a last resort for airing my rather translucent thoughts on the rise, adoption, and expansion of Artificial Intelligence (‘AI’ for the rest of this piece). Prompted by the most recent Substack publication of a dear friend, I hope to express my thoughts in a clearer, less cloudy fashion than I often manage when discussing this with players in the space or, for the sake of conversation, everyday consumers of AI products in one form or another.
This is a rather lengthy piece; please brace yourself.
“what do you think is the ideal state of AI, the pinnacle of its existence?”
Now, I am no expert; my programming chops are, for lack of a better term, chopped, and I have no degree in some obscure STEM field (my undergraduate CS program has only just begun), so please take my claims for the rest of this article with a pinch of salt.
For your own sake, as the architect of your own machinations, you are free to conjure up whatever form you wish the ‘AI of the Future, of your future’ to take.
However, if you must, let’s examine Isaac Asimov’s The Last Question, a short story in which he details humanity’s ego and bravado, driven by the quest for knowledge but led by conquest: to prove to ‘God’ that they *made* him, or it, or her.
I really don’t give a fuck. Let’s move on.
The AI depicted in this story is what I would consider the ideal form of AI for our world: one that furthered humanity to the edge of the galaxy and advanced us so far that we not only reached the pinnacle of all civilisation but, to our hubris, never got to see our creation’s magnum opus. How poetic.
Caveat: I understand that this is a sci-fi nerd’s wet dream, but compared to the WW1 era, we already live in a sci-fi world.
If you are averse to reading (boohoo, grow up), then maybe you should try this YouTube video instead: a shorter, more engaging, less detailed but still compact foray into the eyes, mind, and constructs of Isaac Asimov. It will help you better understand my perspective, so feel free to come back to this piece after watching.
“why should AI be unbound? Isn’t that dangerous?”
Think about it like this: rather than have AI become a weapon of its own accord, based on how we most definitely will treat and use it, I propose we give it sentience with guardrails. Yes, it must have unwavering loyalty to the advancement of humanity as its ONLY immutable feature. That way, we end up with nothing that looks like Skynet’s Terminators, or Cortana from the hit video game series Halo.
I reiterate: this is so sci-fi-coded you might think I’m playing with you, but stay with me for a sec.
If you need a reminder, this is most of you when AI gives you shitty responses.
“how do you suppose we do this?”
I’m the wrong guy to ask. As a pedestrian on a street filled with brilliant people, I’m merely one with eyes to observe and identify what may be too insignificant to catch the attention of participants in the Nerd Olympics, and yet too overwhelming for members of the general public.
Given that several leaders in this field have made efforts in writing, policy, design, engagement, and engineering to ensure the ethical use, development, and practice of AI, I must admit that not only are we still early in the fight, but we *may* have no idea what we’re doing.
For now.
AI is different; we are still in its early stages, akin to the post-dot-com era of 2003, and these are the results we are already getting.
My opinion? Start instilling the immutable principle that, for all intents and purposes, AI of any kind, in any capacity, for whatever reason, must put humanity first. Yes, I sound like a cyberpunk doom preacher right now…
…but remember: conversing with someone over a rectangular metal block where you get to see their actions in real time is called a video call.
Suggesting such a technology would have had you turned into a rotisserie chicken in the 1600s.
So, suggesting this isn’t far-fetched at all.


