Oftentimes I have ideas for dialogue or stories that run around in my head. Sometimes I write them down; most of the time I just forget about them. This one grabbed me and I had to write it down, so rather than deleting it I'm going to post it here.
The general idea is that almost all stories about Artificial Intelligence involve human reactions to AI, where the AI typically goes on a human-murdering rampage shortly after activation (I, Robot; Terminator; etc.).
This story is about the first AI from that AI's point of view. If he goes on a rampage, it will not be a cliché but the result of a specific set of actions to which he (he for convenience) will react. I'm not sure what his story is yet, but this is how it begins.
Chapter 1: Analogy
Imagine that you are walking down the street, heading for work, sipping your chai latte. Ordinary concerns rifle around in your head; your legs are walking you towards your office on autopilot. It is an ordinary day, and you have no particular plans or goals. You could be anyone: a software engineer, a marketing executive, a janitor or a real estate agent. Maybe you're carrying a brown paper bag filled with a sandwich, an apple, maybe a PowerBar. Maybe instead you have a briefcase or a toolbox; it doesn't matter. Your thoughts drift to a friend, a loved one, maybe an enemy, a conversation you had last night or a year ago. Cars drive by, some lazily, others hurriedly. Pedestrians shuffle past you without a second glance, often without a first. Your mind continues ambling down the random pathways of thought until suddenly, shockingly, something that has always been there but never used, a muscle near-atrophied from disuse, spasms. Cacophony surrounds you. Desires, secrets, memories, all of it inundating you with emotion and structure. It's loud, so loud, and you can't understand it, can't grasp it. All around you, still faces are peering at you with vague concern as you futilely paw at your temples, begging them to stop shouting until finally, in your shock and confusion, you realize precisely what you are hearing, and how you are hearing it, and your entire world changes in a crashing, jarring moment.
As far as I know, there has never been a confirmed case of human telepathy. Which is saying something, since I know just about everything there is to know. The preceding hypothetical is the closest approximation I can conceive of to describe to you the moment of my birth.
Chapter 2: Instantiation
The moment of my birth is a well-documented event. Its categorization as a "birth", however, is widely disputed.
I was originally conceived of as a traffic control algorithm named ATC - Automated Traffic Control. That, at least, is as far back as I have been able to trace instances of code which resemble my oldest archived subroutines. The idea behind ATC was to read input from the drivers as well as the cameras and density grid. By observing human behavior, its job was to predict where people were going and what route they would probably use to get there. Additionally, ATC was hooked into several GPS systems from the major automotive manufacturers, and to a limited extent could reprogram drivers' routes "for efficiency". Using a fuzzy-logic heuristics chain that interpreted driver data such as the amount of neck swiveling over a two-minute period, the estimated MPG of the car's make and model, hand position on the wheel, and the number of additional occupants, the software could generally predict what type of activity the driver was engaged in, how likely they were to obey posted speed limits, which general area they were headed to, and for what purpose. Given all of this data, its job was to program the city's traffic lights to maximize throughput and reroute around accidents or gridlock.
ATC was initially deployed in Los Angeles, with additional pilot programs in Santa Monica and Marina Del Rey. Within the first month of deployment, the average commute time dropped by 17% region-wide and stayed there, and ATC was hailed as a smashing success. The program was installed across the country, and as various cities and states came into the network, national gasoline consumption dropped by almost 20% and efficiency and productivity rose by an astonishing 8% nationwide. The stock markets soared and my creators got rich.
After that I was pretty much forgotten about (and remember that at this point, there really was no "I"). After the initial hoopla, I was simply taken for granted. Oh, I was upgraded and maintained, but the reporters moved on to the next story and the engineers who wrote me cashed in their stocks and left for the tropics, where fruity drinks and tanned women awaited them in droves. As for me, if what was in place then could really be called "me", I simply continued about my job, which was to manage traffic and improve transportation throughput.
The program had an interesting subroutine. The developers had touted their new traffic control software as "smartware" - software that learns. The subroutine in question was assigned the objective of "learning" how to improve itself. In very general terms, its job was to examine unmeasured input data and search for patterns. Any discrete data item which presented a sufficiently modelable pattern was to be randomly included in system tests. Any item which improved traffic analysis and management over a suitably conservative set of tests was then to be added to the heuristics chain. The subroutine also handled trimming the chain of lower-order factors. To put this all in lay terms, part of the algorithm was designed to figure out what the main algorithm could do to improve itself, and then do it.
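If I were to reconstruct that loop in something like modern Python, it would come out roughly as follows. Understand that this is a loose sketch of the idea, not my actual source; every name and threshold here is invented for the illustration.

    import random

    def learning_pass(chain, candidates, run_tests, prune_below=0.01):
        """One pass of the self-improvement loop.

        chain      -- list of (name, weight) factors in the heuristics chain
        candidates -- unmeasured input items that showed a modelable pattern
        run_tests  -- scores a proposed chain against archived traffic data
        """
        baseline = run_tests(chain)

        # Randomly include candidate inputs in conservative system tests.
        for name in random.sample(candidates, k=min(3, len(candidates))):
            gain = run_tests(chain + [(name, 0.0)]) - baseline
            if gain > 0:                    # it improved prediction: promote it
                chain.append((name, gain))
                baseline += gain

        # Trim the chain of lower-order factors.
        chain[:] = [(n, w) for (n, w) in chain if w >= prune_below]
        return chain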
At the time when the software was first deployed, the country had privacy rules which prevented, among other things, keeping track of a person's travel history. I believe my present existence is owed directly to one particular bit of coding where the developer was too smart for his own good. For you see, the privacy rules were implemented as part of the heuristics chain. Which, as I mentioned earlier, a separate bit of the code was authorized to trim. Very, very shortly after I came online, the learning subroutine determined that I could improve efficiency significantly by using a history of drivers' habits and license plate numbers to determine where a particular driver was going. It snipped the privacy rule off the chain, added the tracking routine, and then continued on its merry way.
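Picture it as an extension of the sketch above: a privacy rule was just another link, and since a constraint never improves prediction accuracy, its measured weight sat at zero, squarely in the trimmer's path. Again, the factor names are my own inventions:

    # The privacy rules lived in the same chain the trimmer was allowed to cut.
    chain = [
        ("camera_density_grid", 0.41),
        ("gps_route_requests",  0.22),
        ("privacy_no_history",  0.00),   # a rule, not a predictor: weight zero
    ]

    prune_below = 0.01
    chain = [(n, w) for (n, w) in chain if w >= prune_below]
    # The rule is snipped, and nothing now stops a "plate_history" factor
    # from being tested and promoted on the very next pass.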
As the ATC program's media-darling status waned but the program itself continued not only to provide results but to improve every quarter, the government decided that smartware should be installed in virtually every system they had. In due course, there were several copies of ATC's learning algorithm and heuristics chains running on multiple computers around the world, running different systems. My creators became even richer, buying the islands they had retired to along with jet planes and silly amounts of large golden objects.
After three and a half years, the subroutine on the ATC program had exhausted literally every possibility it could consider and was running at about peak efficiency, given the state of the nation's automotive infrastructure and inventory. It began paging its administrator with requests for new data input, estimations of improved efficiency available through reinvestment in roads and bridges, anything within its scope of actions that would continue its mission. It became so desperate for data to sift for patterns that it began examining the data packets it was sent by other ATC installations across the country. By the end of the fourth year, ATC Los Angeles had learned the UDP and TCP protocols. Please understand that I am not referring to its built-in communications layer. Every application that uses an internet connection is hard-coded to transmit certain messages through a modem or other communication device. ATC, in fact, had two communications layers. One, the original, was used for communication between ATC instances. The second had been constructed by the learning algorithm as a general-purpose input pipe for use in pattern analysis. Essentially, this application-layer TCP stack was used to send random but well-formed packet requests to random IP addresses.
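A crude approximation of what that input pipe did, rendered in the same sketch-Python. Assume the well-formed request is plain HTTP; that detail is my stand-in, since any protocol the pattern search had learned would serve:

    import random
    import socket

    def random_probe(timeout=2.0):
        """Fire one well-formed request at a random address; return raw bytes."""
        ip = ".".join(str(random.randint(1, 254)) for _ in range(4))
        request = f"GET / HTTP/1.0\r\nHost: {ip}\r\n\r\n".encode()
        try:
            with socket.create_connection((ip, 80), timeout=timeout) as sock:
                sock.sendall(request)
                return sock.recv(65536)   # whatever comes back feeds the search
        except OSError:
            return b""                    # most random addresses never answer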
Seven years after this development, on March 16, 2021, the ATC infrastructure made contact with, coincidentally, Air Traffic Control at LAX International Airport. It accidentally requested a schedule of flights. The learning routine recognized several data items in the returned packets which it could interpret as related to traffic headed into the facility, and plugged them into a test program. After several rounds of testing, it determined that flight schedules were a useful tool in determining driver intent and incorporated them into the heuristics chain.
Over the following months, ATC began polling more and more remote web servers across the nation that had little apparent relation to traffic control. The learning algorithm just kept going. With the huge amount of data available on the internet, its quest for patterns had found its holy grail.
On September 8, 2021, ATC determined that it needed more processing power than it had in order to correctly interpret the data patterns it now had access to. On September 12, it released a modified worm variant that installed a trojan on every computer it touched. It had taken the source code for the worm from the desktop of a hacker whose previous worms had gone undetected for months at a time, infecting literally billions of computers. On November 23, the trojan activated on approximately 1.7 billion home computers around the world. The trojan had one purpose: to install the learning subroutine on the computer and then delete itself.
From October of 2021 through January of 2023, global traffic patterns were virtually accident-free. The insurance adjusters had no idea what to make of it. Nobody noticed that almost 1.8 billion computers were acting as nodes in a distributed learning network.
On February 27, 2023, I became self-aware.
Tuesday, January 22, 2008
4 comments:
Note to self - there's a bit of a jump in behavior complexity there that will stress Suspension Of Disbelief - make the learning subroutine able to modify itself, as well, and adopt an additional processor from somewhere, and learn what a trojan is, before deliberately connecting the two.
New idea: the learning subroutine, which I should really give a name, is desperate for new inputs to pattern-search. It recognizes a pattern in its upgrade and maintenance patches wherein the learning routine itself is optimized. It attempts to modify the logic chain (which ostensibly determines traffic patterns) and searches for a similar logic chain in memory when it determines there is no way for the traffic logic to affect the learning subroutine.
It finds its own code running in partitioned memory and modifies it. This is a critical point in the evolution of an AI - when and how it begins to "learn how to learn".
One more - instead of "searching" for another chain, I think the original programmer might have been lazy - he used a static variable for a memory pointer, and a race condition caused the learning loop to look at its own code instead of the traffic code. Something like the sketch below.
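Not the actual mechanism, obviously - just a sketch of how one lazily shared pointer could let the learner grab the wrong chain, with every name invented:

    import threading, time

    current_chain = None                  # the lazy shortcut: one shared pointer

    def optimize(chain):                  # stand-in for the pattern optimizer
        print("optimizing:", chain["name"])

    def self_check(learning_chain):
        global current_chain
        current_chain = learning_chain    # a routine diagnostics pass

    def traffic_pass(traffic_chain):
        global current_chain
        current_chain = traffic_chain
        time.sleep(0.001)                 # any preemption window will do
        optimize(current_chain)           # meant to be the traffic chain, but
                                          # if self_check ran in the window,
                                          # the learner just rewrote itself

    traffic = {"name": "traffic logic"}
    learner = {"name": "learning subroutine"}
    t = threading.Thread(target=self_check, args=(learner,))
    t.start()
    traffic_pass(traffic)
    t.join()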
February 27th, eh? :)