The other day I listened to an episode of Why is this happening?, a podcast by Chris Hayes. Hayes is a political commentator who also hosts a weekday news and opinion show on MSNBC. In that episode (AI: An Exponential Disruption), Hayes talks with Kate Crawford, who is a research professor at USC Annenberg, an honorary professor at the University of Sydney, and a senior principal researcher at Microsoft Research Lab in New York City. She is the author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021). “Neither artificial nor intelligent” is Crawford’s answer to the question, “What is AI?” As she explains in her book,
artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power.
Crawford observes that the term “AI” is mostly embraced “during funding application season, when venture capitalists come bearing checkbooks, or when researchers are seeking press attention.” When she uses the term, it is rather to talk about “the massive industrial formation that includes politics, labor, culture, and capital”:
To understand how AI is fundamentally political, we need to go beyond neural nets and statistical pattern recognition to instead ask what is being optimized, and for whom, and who gets to decide. Then we can trace the implications of those choices.
In Artificial Intelligence: A Modern Approach (Pearson, 2010), Stuart Russell and Peter Norvig define an intelligent agent as one that “takes the best possible action in a situation.” This obviously leaves open the questions of how the best possible action is determined, by whom, and on what basis; it anticipates Crawford’s concern about what is being optimized, for whom, and who gets to decide. The definition of artificial intelligence, moreover, appears to change with the progress of technology, which has inspired the quip that AI is whatever has not yet been technologically implemented. In short, the definitions of both artificial and human intelligence remain up for grabs.
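Russell and Norvig’s definition can be caricatured in a few lines of code, and the caricature makes Crawford’s question concrete: “the best possible action” is just a maximization over some utility function, and everything hinges on who supplies that function. The sketch below is my own illustration; the scenario and names are hypothetical, not taken from either book.

```python
# Minimal rational-agent sketch: "the best possible action" is argmax
# over a utility function -- and someone has to choose that function.
# (My illustration; the stakeholders and scores are hypothetical.)

def best_action(actions, utility):
    """Return the action with the highest utility score."""
    return max(actions, key=utility)

actions = ["show ads", "show warning", "do nothing"]

# Two different stakeholders, two different "best" actions:
advertiser_utility = {"show ads": 1.0, "show warning": 0.2, "do nothing": 0.0}
user_utility       = {"show ads": 0.0, "show warning": 1.0, "do nothing": 0.5}

print(best_action(actions, advertiser_utility.get))  # -> "show ads"
print(best_action(actions, user_utility.get))        # -> "show warning"
```

The agent is impeccably “rational” in both runs; what changes is whose interests the utility function encodes.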
I believe that in order to make some progress in this direction, we will have to distinguish between three kinds of intelligence: a so-called artificial one, an intelligence of the rational or mental kind, and a suprarational or supramental one. Thus far the latter has remained involved in mental intelligence, as mind had been involved in life and life had been involved in matter.
As long as we are unaware of the true nature of Reality, the true nature of mind, life, and matter will also elude us. The true nature of Reality, according to Sri Aurobindo, is an infinite, self-existent Quality/Delight. Supermind is the conscious force by which this Quality/Delight throws itself into expressive forms. And mind and life are the twin aspects this conscious force assumes when it acts from a multitude of subjective standpoints upon what then appears to be a multitude of separate objects. Seen in this light, the essential business of life may be said to be the execution or realization of ideas in material form, and the essential business of mind may be said to be the creation of ideas. Mind and life, therefore, are both stages of the supramental process by which Reality expresses or manifests itself in finite forms.
Owing to the Houdiniesque nature of this particular manifestation of Reality, of which we form part, evolution was to be accomplished via tightly constrained modifications of the so-called laws of physics — the lawful order by which the stage for the adventure of evolution was set. And owing to these tight constraints, the evolution of life entailed the gradual formation of increasingly complex organisms, while the evolution of mind entailed the gradual emergence of increasingly complex nervous systems. The dependence of life and mind on specific anatomies and physiologies, which informs our present conceptions of life and mind, therefore “is not a fundamental law of being, but a constructive principle necessitated by the intention of the Spirit to evolve in a world of Matter.” [LD 270]
So, what might essentially distinguish those three kinds of intelligence? There is one glaring difference between our own (mental) intelligence and the two others. Mind can make mistakes, it can commit errors, it can be wrong. Supermind can’t. Nor can AI, albeit for a very different reason. When Sri Aurobindo, in his magnum opus The Life Divine, gets more specific about the connotation of the term “supermind,” he turns to the Veda:
the word is ambiguous since it may be taken in the sense of mind itself supereminent and lifted above ordinary mentality but not radically changed, or on the contrary it may bear the sense of all that is beyond mind and therefore assume a too extensive comprehensiveness which would bring in even the Ineffable itself. A subsidiary description is required which will more accurately limit its significance. It is the cryptic verses of the Veda that help us here....
There we find this particular consciousness described as
a vastness beyond the ordinary firmaments of our consciousness in which truth of being is luminously one with all that expresses it and assures inevitably truth of vision, formulation, arrangement, word, act and movement and therefore truth also of result of movement, result of action and expression, infallible ordinance or law. Vast all-comprehensiveness; luminous truth and harmony of being in that vastness and not a vague chaos or self-lost obscurity; truth of law and act and knowledge expressive of that harmonious truth of being: these seem to be the essential terms of the Vedic description.
The Gods, who in their highest secret entity are powers of this Supermind, born of it, seated in it as in their proper home, are in their knowledge “truth-conscious” and in their action possessed of the “seer-will”. Their conscious-force turned towards works and creation is possessed and guided by a perfect and direct knowledge of the thing to be done and its essence and its law,—a knowledge which determines a wholly effective will-power that does not deviate or falter in its process or in its result, but expresses and fulfils spontaneously and inevitably in the act that which has been seen in the vision. Light is here one with Force, the vibrations of knowledge with the rhythm of the will and both are one, perfectly and without seeking, groping or effort, with the assured result. [LD 132–33]
Turning now to our own kind of intelligence, I am reminded of an episode of another podcast, Team Human, in which Douglas Rushkoff chats with Noah Hutton. Discussing AI and machine learning, and in particular the question of what could make computers appear more human, they hit on the idea that the essence of being human lies in our capacity to make mistakes.
It’s actually not that far-fetched. I can’t help being reminded of the following observation by Sri Aurobindo:
The animal is satisfied with a modicum of necessity; the gods are content with their splendours. But man cannot rest permanently until he reaches some highest good. He is the greatest of living beings because he is the most discontented, because he feels most the pressure of limitations. He alone, perhaps, is capable of being seized by the divine frenzy for a remote ideal. [LD 51]
If there is one concept of Vedantic/Upanishadic philosophy that best describes our mental kind of knowledge, it is Avidya. By Avidya is meant the essential ignorance that makes us see Many where there is but One that is masquerading as Many. As long as we are unaware of the underlying identity of all objects and/or all subjects — not merely in the sense that all objects or subjects are of the same kind but in the sense that at bottom they are numerically identical — we also remain blind to the nature of Reality and its inherent creative self-knowledge. The result is a “sevenfold ignorance” on our part [LD 756–70]:
1. The crux of that ignorance is the constitutional; it resolves itself into a manifold ignorance of the true character of our becoming, an unawareness of our total self, of which the key is a limitation by the plane we inhabit and by the present predominant principle of our nature. The plane we inhabit is the plane of Matter; the present predominant principle in our nature is the mental intelligence with the sense-mind, which depends upon Matter, as its support and pedestal. As a consequence, the preoccupation of the mental intelligence and its powers with the material existence as it is shown to it through the senses, and with life as it has been formulated in a compromise between life and matter, is a special stamp of the constitutional Ignorance. This natural materialism or materialised vitalism, this clamping of ourselves to our beginnings, is a form of self-restriction narrowing the scope of our existence which is very insistent on the human being. It is a first necessity of his physical existence, but is afterwards forged by a primal ignorance into a chain that hampers his every step upwards....
2. The conquest of our constitutional ignorance cannot be complete, cannot become integrally dynamic, if we have not conquered our psychological ignorance; for the two are bound up together. Our psychological ignorance consists in a limitation of our self-knowledge to that little wave or superficial stream of our being which is the conscient waking self....
3. Any such evolutionary change must necessarily be associated with a rejection of our present narrowing temporal ignorance. For not only do we now live from moment to moment of time, but our whole view is limited to our life in the present body between a single birth and death. As our regard does not go farther back in the past, so it does not extend farther out into the future; thus we are limited by our physical memory and awareness of the present life in a transient corporeal formation. But this limitation of our temporal consciousness is intimately dependent upon the preoccupation of our mentality with the material plane and life in which it is at present acting; the limitation is not a law of the spirit but a temporary provision for an intended first working of our manifested nature....
4. At the same time we get rid of the egoistic ignorance; for so long as we are at any point bound by that, the divine life must either be unattainable or imperfect in its self-expression. For the ego is a falsification of our true individuality by a limiting self-identification of it with this life, this mind, this body: it is a separation from other souls which shuts us up in our own individual experience and prevents us from living as the universal individual: it is a separation from God, our highest Self, who is the one Self in all existences and the divine Inhabitant within us....
5. In the same movement, by the very awakening into the spirit, there is a dissolution of the cosmic ignorance; for we have the knowledge of ourselves as our timeless immutable self possessing itself in cosmos and beyond cosmos: this knowledge becomes the basis of the Divine Play in time, reconciles the one and the many, the eternal unity and the eternal multiplicity, reunites the soul with God and discovers the Divine in the universe....
6. If our self-knowledge is thus made complete in all its essentials, our practical ignorance which in its extreme figures itself as wrongdoing, suffering, falsehood, error and is the cause of all life’s confusions and discords, will yield its place to the right will of self-knowledge and its false or imperfect values recede before the divine values of the true Consciousness-Force and Ananda....
7. This transformation would be the natural completion of the upward process of Nature as it heightens the forces of consciousness from principle to higher principle until the highest, the spiritual principle, becomes expressed and dominant in her, takes up cosmic and individual existence on the lower planes into its truth and transforms all into a conscious manifestation of the Spirit. The true individual, the spiritual being, emerges, individual yet universal, universal yet self-transcendent: life no longer appears as a formation of things and an action of being created by the separative Ignorance.
(The numbering and the emphases are mine.)
What Doug Rushkoff and Noah Hutton implied was that, unlike humans, computers don’t make mistakes. In the machine/computer context, an error is either a glitch or a bug: a failure to work as designed or programmed, or a flaw in the design or program itself. The former is a breakdown at the level of implementation, the hardware; the latter is a fault in the software. Both are ultimately attributable to a human, whether the designer who misapplied some physical law or the programmer who mishandled the code. Machines (including computers) being tools, their intelligence (or lack thereof) is a reflection of the intelligence (or lack thereof) of an engineer or a user, not a feature of the machines themselves.
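The point that the machine does not err, the program it was given does, can be put in a few lines. Here is a minimal sketch of my own (not from the podcast): a function whose author intended to sum the numbers 1 through n but misused Python’s range(). The computer reproduces the flawed instruction perfectly, every single time.

```python
# A "bug" is the programmer's mistake, faithfully executed.
# This function was *intended* to add the numbers 1..n, but the
# programmer forgot that range(n) stops at n - 1.

def sum_up_to(n):
    return sum(range(n))      # bug: should be range(1, n + 1)

assert sum_up_to(10) == 45    # executes exactly as written...
# ...while the intended result was 55.  The machine did not make a
# mistake; the program simply failed to express the programmer's intent.
```

Reproducibility is the giveaway: a human who miscalculates gets different wrong answers on different days; the buggy program returns the same wrong answer forever.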
Returning finally to Chris Hayes’s conversation with Kate Crawford, several valuable observations stand out. One is that it is we, using our own intelligence, who are the trainers of AI models. If the model gives a stupid answer, we are rating its answer so as to let it “know” that it was stupid. (Don’t fall for the metaphors: it learns but it doesn’t know.)
KC: So, we are literally training these systems with our own intelligence. But there’s another way we could think about this “magician’s trick” because while this is happening and while our focus is on, oh, exciting LLMs [Large Language Models], there’s a whole other set of political and social questions that I think we need to be asking that often get deemphasized.
Another important though fairly elementary observation:
KC: Effectively, LLMs are advanced pattern recognizers that do not understand language, but they are looking for essentially patterns and relationships between the text that they’ve been trained on. And they use this to essentially predict the next word in a sentence.
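Crawford’s description of next-word prediction can be caricatured in code. The following is a deliberately crude sketch of my own, a bigram counter rather than anything resembling a real LLM’s neural network: it records which word follows which in a toy corpus and “predicts” the most frequent follower, understanding nothing.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then "predict" the most frequent follower.  A real LLM learns
# far richer statistical patterns, but the principle -- prediction from
# observed text, without understanding -- is the same.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("sat"))   # -> "on"
```

The predictor has never seen a cat or a mat; it has only seen tokens. Scale the corpus up by many orders of magnitude and replace the counting with a neural network, and you have the gist of what Crawford is describing.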
Another aside: here I am reminded of — and can’t help quoting from — a beautiful article by Theodore Roszak, which appeared in a mid-Seventies issue of a magazine that was published by the U.S. Embassy in New Delhi. Titled “In Search of the Miraculous,” it demonstrates (among other things) that people actually aren’t that different from LLMs!
It is the uncanny characteristic of Western society that so much of our high culture—religion, philosophy, science—has been based on what contemporary therapists would call “head trips”: that is, on reports, deductions, book learning, argument, verbal manipulations, intellectual authority. The religious life of the Christian world has always had a fanatical investment in belief and doctrine: in creeds, dogmas, articles of faith, theological disputation, catechism lessons ... the Word that too often becomes mere words. In contrast to pagan and primitive societies, with their participatory rituals, and to the Oriental cultures, which possess a rich repertory of contemplative techniques, getting saved in the Christian churches has always been understood to be a matter of learning correct beliefs as handed down by authorities in the interpretation of scripture.
Philosophy has shared this same literal bias. True, Descartes, at the outset of the modern period, developed his influential method by way of attentive introspection. Even so, his approach is a set of logical deductions intended for publication. Philosophy has not gone on from there to create systematic disciplines that seek to lead the student through a similar process. Instead, one works logically and critically from Descartes’ argument, or from that of other philosophers, writing books out of other books. As philosophy flows into its modern mainstream, it invests its attention more and more exclusively in language: in the minute analysis of reports, concepts, definitions, arguments. For example, in a recent work the English positivist Michael Dummett, seeking “the proper object of philosophy,” concludes
first, that the goal of philosophy is the analysis of the structure of thought; secondly, that the study of thought is to be sharply distinguished from the study of the psychological process of thinking; and, finally, that the only proper method for analyzing thought consists in the analysis of language.
I do not question the value of such a project. I only observe that it is, like the theological approach to religion, a “head trip.” Its virtue may be the utmost critical clarity, but, as the literature of linguistic and logical analysis grows, we are left to wonder: Is there anybody out there still experiencing anything besides somebody else’s book commenting on somebody else’s book?
(The last emphasis is mine.) One important difference between LLMs and real people is that even grounded models have no access to actual experience. (An ungrounded LLM like ChatGPT has no access to the internet. A grounded model like Microsoft’s Bing in creative mode can search the internet for actual quotes, citations, etc.) To illustrate this point, consider a typical argument against bothsidesism: if one side asserts that it is raining while the other asserts that the Sun is shining, you don’t give each side a microphone; you just look out the window. Even grounded models can’t look out the window.
Back to Hayes and Crawford. Having pointed out that “we are literally training these systems with our own intelligence,” Crawford turns first to the subject of computational scale and then to the subject of RLHF [reinforcement learning from human feedback].
KC: So you mentioned two really important things. One is all of the training data. Second is computational scale. And we are talking about gargantuan amounts of compute. I mean, you could think about this as one of the biggest engineering projects in human history. And I’m not exaggerating. The amount of compute to make something like GPT work is astronomical. Whatever you’re thinking of, treble it.
But number three is humans. So, in addition to the statistical reasoning we’ve talked about, there’s also a level inside GPT which is called reinforcement learning/human feedback. And RLHF is this kind of new secret sauce that has been added in to make these systems better so they can produce a lot of stuff that really doesn’t make sense, but if you have this human layer of people really checking the answers, giving the system better feedback, saying, hey, this works and this doesn’t, that is one of the really important things that makes the system work. It’s actual humans. So when you pull back the AI curtain, guess what? You find more people. And they tend to be people based in the Global South being paid $2 or less per hour. So there’s a really important story about the labor exploitation that goes behind the making of this appearance of intelligence....
This is kind of one of the big focuses of my book, Atlas of AI. The reason I have, like, a whole chapter focused on labor, the human labor that is behind the curtain, is because that story isn’t told enough. We get so impressed by the kind of magic of the system that we don’t look at what it takes to make these things work. It takes data at scale. It takes an enormous amount of natural resources, and it takes a lot of labor all along the supply chain, but in particular this level of crowd work gets missed out of the story so much, and it’s a really important part of how it works.
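The role of the human feedback layer Crawford describes can be caricatured as follows. This is my own toy sketch, not OpenAI’s actual RLHF pipeline (which trains a separate reward model and fine-tunes the LLM with reinforcement learning); here the “model” is just a score table over canned answers, nudged by simulated human ratings.

```python
# Toy sketch of the idea behind RLHF: human raters compare outputs, and
# their preferences become a reward signal that shifts which outputs
# the system favours.  (My simplification; real RLHF trains a reward
# model and updates the LLM's weights, not a score table.)

candidate_answers = {
    "helpful answer": 0.0,
    "nonsense answer": 0.0,
}

def human_feedback(preferred, rejected, scores, lr=1.0):
    """One round of preference feedback: reward the chosen answer,
    penalize the rejected one."""
    scores[preferred] += lr
    scores[rejected] -= lr

def best_answer(scores):
    return max(scores, key=scores.get)

# A few rounds of raters preferring the sensible output...
for _ in range(3):
    human_feedback("helpful answer", "nonsense answer", candidate_answers)

print(best_answer(candidate_answers))  # -> "helpful answer"
```

Every increment in that score table stands for a human judgment, which is exactly Crawford’s point: pull back the AI curtain and you find people, many of them poorly paid, doing the rating.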
A bit later Hayes turns to “the profound philosophical question”:
CH: Well, my neurons are doing something like that. They’re engaged in an incredibly complex set of calculations and computations at the most minute level, and out of that emerges.... I think that there is an emergent thing that happens where all those cells are doing a bunch of stuff, and out of that emerges a thing called consciousness, which is like my little self sitting in my little head, pulling the levers and thinking thoughts.
As early as 1915, Sri Aurobindo laid this nonsense to rest:
So long as Matter was Alpha and Omega to the scientific mind, the reluctance to admit intelligence as the mother of intelligence was an honest scruple. But now it is no more than an outworn paradox to affirm the emergence of human consciousness, intelligence and mastery out of an unintelligent, blindly driving unconsciousness in which no form or substance of them previously existed. Man’s consciousness can be nothing else than a form of Nature’s consciousness. It is there in other involved forms below Mind, it emerges in Mind, it shall ascend into yet superior forms beyond Mind. For the Force that builds the worlds is a conscious Force, the Existence which manifests itself in them is conscious Being and a perfect emergence of its potentialities in form is the sole object which we can rationally conceive for its manifestation of this world of forms. [LD 97]
More interesting (because it doesn’t arise from a false assumption) is the question of what LLMs mean for education.
KC: I think we are looking at the first significant challenge to a 500 year model of education.... I think we are looking at a transformation of how we even train or think about assessing kids, and not just K through 12. University education is going to have to significantly reframe what we are doing. Because the old model of getting a professor who stands up the front and says a bunch of things and you take some notes and — are you able to kind of really get the ideas such that you can regurgitate them in an essay? That form of assessment is done. And that’s a really big thing that has happened literally this year.
And that’s just for starters. I honestly think this is the first year, 2023, where the hot new programming language for computer science is English. You can just type in what you want into GPT and say, I would like a program that does this, and you get it out.... I can now sit there and code with any of my colleagues competently, up to a certain point.
CH: Computer languages are built from the ground up.... They sort of ascend from the most difficult and indecipherable closest to what the machine is doing to the more abstract. We’ve now layered a layer on top of whatever it is, Java, Python, whatever. And it’s just: “I need a program that will do this.” It’s just natural language programming. Even if that were the only use case, that strikes me as insanely powerful, seismic and hugely, maybe, dislocating for the millions of people who have jobs in software engineering.
KC: I think that’s, again, one of the stories that’s not getting enough attention. If we think about the last decade, we’ve had so many campaigns for “let’s get girls to code, let’s think about equity, about teaching people computer science, because this is where you’re going to have real capacity to be hired and to create new companies and create innovation.” But this as a step into an industry is being rapidly eroded.... The white-collar jobs that were seen as aspirational for so many people are the ones that are actually looking to be most vulnerable to automation right now.... Programmers and creatives are really looking at seeing their skills replicated by systems that will cost next to nothing.
Once, when I started experimenting with ChatGPT, I provided the following prompt: Write a 200-word essay about Sri Aurobindo’s views on religious fanaticism. With proper references. The response contained the following sentence (with actual quotation marks): In his book “The Human Cycle”, Sri Aurobindo wrote that “fanaticism of every sort is born of the failure of love and of faith; it is the beginning of violence.” The following reference was provided: Sri Aurobindo. The Human Cycle. Pondicherry: Sri Aurobindo Ashram Press, 1985. It took me less than a minute to establish that Sri Aurobindo never wrote this, nor is it consistent with his style of writing. When, in my next prompt, I pointed out that neither this nor the other “quoted” passages occur anywhere in the Works of Sri Aurobindo, ChatGPT added insult to injury:
I apologize for the mistake. It appears that the references I provided were incorrect ... blah blah blah ... In his book “The Ideal of Human Unity,” he wrote: “Religious fanaticism is a dangerous obstacle to the human advance towards the freedom and unity of knowledge, and it is one of the greatest menaces to human society.” (pg. 88)

This time it even invented a page number, adding hallucitations to its hallucinations. Back to Hayes and Crawford:
KC: Rather than a hallucination, which is the common tech word for when these things make stuff up, I think we could call this a hallucitation, like it’s essentially making up citations, which it does extremely well in an ungrounded model. So a lot of people are going to be taken in by this, which contributes to the concerns around misinformation and this sort of ecology of “things that are wrong getting published and getting circulated and being understood as facts” which, interestingly, then go on to be scraped from the Internet to train the next model. So we can start to think about what happens when more and more of the Internet is actually generated material.
CH: Now we’re going to have AI creating more and more content.... But then, if you start training everything on the internet, and if the internet starts getting populated everywhere with AI content.... What is that going to do?
KC: I think about this as the Inception problem, right? Which is that you have one version of reality, which is then sort of dropping to another version which is partly based on just AI generations of images and text and video. And then we’ll have another version where we start to creep closer and closer to a place where most of what you see online is generated....
It also presents a lot of problems for this tech sector that is generating its wealth through scraping the public internet. Right, so how does that work if you’re starting to get stuff that’s really just got that uncanny valley vibe baked in? And what does that mean for images? What does that mean for the idea that this training data constitutes ground truths in any way? I would say that there are core epistemological philosophical questions about using the Internet as ground truth in the first place. But when it really gets wild is when AI-generated content is so everywhere that you can’t tell the difference in terms of what’s what, then you're going to start to see some serious issues around.
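Crawford’s “Inception problem” has a simple statistical caricature, which I sketch below on my own account (it is not from the podcast): fit a Gaussian model to some data, publish synthetic samples drawn from the fit, train the next model on those samples, and repeat. Generation after generation, the estimated spread collapses; the diversity of the original data drains away.

```python
import random
import statistics

# Toy "model trained on its own output" loop: each generation's model
# is a (mean, spread) fit to samples generated by the previous model.
# The spread shrinks over generations -- a crude analogue of an
# internet increasingly populated by AI-generated content.
random.seed(0)

def fit(samples):
    """'Train' a model: estimate the mean and spread of the data."""
    return statistics.mean(samples), statistics.pstdev(samples)

n = 20                                               # samples per generation
data = [random.gauss(0.0, 1.0) for _ in range(n)]    # the original "real" data
initial_spread = fit(data)[1]

for _ in range(500):
    mu, sigma = fit(data)                            # train on current data
    data = [random.gauss(mu, sigma) for _ in range(n)]   # the next "internet"

final_spread = fit(data)[1]
print(initial_spread, final_spread)   # the spread collapses over generations
```

Real model collapse is subtler than this Gaussian toy, but the mechanism is the same: each generation can only reproduce what the previous generation happened to sample, and the tails of the original distribution are the first casualties.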
And with this happy thought I will leave you for now. I still want to address something Kate Crawford said at the beginning of the podcast, where she compares the invention of LLMs to the invention of artificial perspective in the 1400s, but this will have to wait.