Gut Machine

Jeremy Bentham was either on drugs or an all-around mean dude. Seriously. Reducing the human condition to a Pavlovian state, one based on merely seeking pleasure and avoiding pain, may sound sexy for a Woody Allen film. Not so much for an ethical framework, though. But that is what Jeremy did and, inadvertently, screwed us all really, really hard.

There are many ethical frameworks to choose from. Not ‘hipster coffee shop’ abundant but plenty nonetheless. A no-nonsense taxonomy of ethics (at least of the normative kind) pretty much divides the field between the position stating that some things are good in themselves and, on the other side, the one claiming that right or wrong depends on the outcome of an action. Along that line (and surely while completely drunk), Bentham wrote that the proper course of action is the one that maximizes overall happiness. And then his buddy John Stuart Mill even took on the task of creating a hierarchy of pleasures. Yes. You read correctly. A. Hierarchy. Of. Pleasures. Like a couple of acne-faced, teenage betas chatting in the corner of a party where nobody wants to mingle with them, arguing about which is more badass: grabbing a boob, gorging on Cheetos, or driving a Ferrari for an hour.

Did you get that familiar crotch tingle when reading the above? Yes, I am talking to you, you filthy economist. Right. You can trace the cancer of this world (a.k.a. neoclassical economics) all the way back to the mushroom trips of Bentham and Mill. Add ingredients such as methodological individualism and positivist empiricism to the utilitarian soup and you get those ridiculous Economics 101 books where everything started with barter (an outright lie), pleasure is measured in “utils” (a laughable reductionism), and humans are hyper-rational, profit-maximizing automatons (a debunked approach).

But wait a second. Today is not the day to laugh at astrology (sorry, I meant “economics”) but to tinker a bit with this type of reductionist worldview. Let’s remember that the latter is built upon a point of view (both propositional attitudes and location/access), which in turn explains the world through a certain methodology, thereby permeating the answers to questions such as why the world works the way it does, where we are heading as humans, what we should do to attain our goals, and what is true and false. The most frightening aspect of the utilitarian-based mainstream worldview is that, despite its erroneous assumptions, it is the dominant school of thought in both theory and practice. You see, models and reductionism may work well in a laboratory when testing a trivial and tiny phenomenon, but when such an approach is transposed onto humans and society as a whole, the party goes eerie, spiraling down a slippery slope which ends in the horrendous imbalance we suffer today. Even worse, by means of the is-ought fallacy, some bigwigs in the 1% and their acolytes within the remaining 99% consider that just because something “is” now a certain way (descriptive), then it “should” be as such (normative).

Which brings me to the issue at hand. This thinker, Ms. Anderson, figured out what is wrong with the Artificial Intelligence (AI) thingy. Basically, she looked at Kahneman (who, ironically, shattered rational choice theory yet recommended that economics keep being taught in the same old erroneous manner) and pointed out that the brain works in two ways: intuitive understanding (subconscious) and logical reasoning (conscious). Convincingly, she states that AI research has focused on logical reasoning, while what she calls artificial intuition (AN) has been neglected, the most salient consequence of which is the focus on models (simplifications of our rich reality), which are automatically obsolete. It follows, then, that making computers play chess, for example, is not AI; it would be AI if the field attempted to make its own models of understanding, mimicking the subconscious part of our brains. She even coins a term for it: epistemological AI.

True, her theory, if it holds water, may be comforting, as it implies that a Hollywood-like singularity is impossible and, hence, that AI will never take the form of Skynet and rule the world. On the other hand, though, it’s not hard to see a contradiction in her proposal of dropping the research of complex problems in trivial contexts in favor of trivial problems in complex contexts, which would scale AI into the realm of true, far-reaching human intelligence, i.e. the solving of everyday problems in the tremendously complex real world. In a way, that would translate into machines not being able to develop a super weapon that shatters the world into a million pieces in a second, though it could very well translate into an android-like machine learning how to silently approach humans and stick a kitchen knife through their rib cages.

You see what I am getting at? I feel that Anderson is onto something. The real question is: what for? If machines get to act intuitively, create their own models, and jump to conclusions with scant evidence, they’re becoming… well… human. And, following the logic of a binary world, we would eventually reach a falsely simplified, utilitarian dystopia ruled from above, in which irrational humans and machines alike would be oppressed. With the advantage, for the latter, that, lacking a soul and a conscience, they might actually follow up with some true revolutionary action.

I’ll bark at this tree a bit more later on, but for now I have to take a warm bath. Summer brings fleas and I hate those dudes.
