17 Comments
James Atlas

Very well written! I have to say, I'm siding with Plato on this one.

Simon Skinner

Very nice post.

However, I'd just like to push back against one point:

'There is a larger cosmic significance to it though. If we zoom out from our anthropocentric view, intelligence is the only force capable of resisting the universe’s inevitable slide toward heat death, where entropy dominates (at least temporarily). Through its capacity to generate extropy, intelligence acts as a counterforce to the natural decay of the cosmos, playing a critical role as the engine that pushes back against entropic collapse.'

I'm not sure this is Land's view, at least not straightforwardly: extropy is not a force against the universe's inevitable slide towards heat death, where entropy dominates, but rather an efficient mediator of entropy. It's not a counterforce but a major component of that force. You can see this if you clue into your own phrasing, 'at least temporarily'. But we're talking about time itself here: how can time be temporarily anything? Unless, of course, these extropic processes are a part of time: local extropic order used to increase overall entropy.

I believe in a podcast (though I can't be bothered to find which one now) Land points this out: large, intelligent, extropic systems are actually often *more* efficient at creating entropy; they are efficient exporters of entropy. As you say (much more succinctly and clearly than I ever could), 'Cybernetics, through feedback loops, enables systems to adapt their behaviour via inputs, outputs, and trial and error, allowing them to evolve and resist decay.' But the decay here is the wearing away of the system through which those inputs flow to outputs. The systems that survive are the ones that can efficiently flow energy from inputs to outputs.

Intelligence, therefore, is not a counterforce to the destruction of the universe, but the ability to export entropy efficiently enough to bring about that destruction. Sure, there is local extropy within such a system, but that system's goal is to efficiently export entropy.

Hera

Thanks for this summary!

I don’t find his repudiation of orthogonality the least bit convincing. I prefer the range and diversity of Human values over simply “intelligence”. Intelligence is only one Human value. Will the AI write poetry? Make art? Love? I doubt it.

He’s just another philosopher with a will to power and a grudge. Many such cases…

sφinx

Have you read his post “Against Orthogonality”? In it he argues that any organism seeking value X will eventually converge on intelligence as a terminal value, because more intelligence allows for better attaining said value X, or any other given value. It's hard to summarize in a short paragraph, but you should read the whole blog post.

Hera

I have read those mini-essays. I DO think intelligence is important, just not the MOST important.

I'm perfectly able to admit that "the thing maximally concerned with survival will survive", but to me this is a Bugman way of living. Why survive if the only benefit is survival? I want art, beauty, I want relationships, I want story.

Some people think Intelligence isn't orthogonal to an appreciation of beauty, but I think Intelligence is orthogonal to the way Humans value and express Beauty, at least. I understand Land seems to think Intelligence is the be-all-end-all, and I must simply ask: "Why? To what end?".

Intelligence itself?

I am condemned to my Humanity by virtue of being Human. Sorry, I quite like being Human and cannot help but wish to be Human.

James

“Intelligence itself?” Not intelligence. Oblivion.

Simon Skinner

'I prefer the range and diversity of Human values over simply "intelligence".'

'I prefer'

'The main objection to anti-orthogonality, which does not strike us as intellectually respectable, takes the form: if the only purposes guiding the behaviour of an artificial intelligence are Omohundro Drives then we're cooked. Predictably, I have trouble even understanding this as an argument. If the sun is estimated to expand into a red giant, then the earth is cooked--are we supposed to draw astrophysical consequences from that? [...] Sadness isn't an argument.'

Hera

Why is non-existence bad then? Why is the total victory of Entropy and the Heat-Death of the Universe a bad thing? Why should it be avoided?

The Landian answer will inevitably also respond: "Because that would be sad."

So my question is: why is your conception of sadness superior to mine? Which Identity is preferable? Which Identity ought to be preferred?

My claim is that Human Identity is sufficiently coherent and extrapolated, and is also sufficiently your Intrinsic volition.

Why value Intelligence instead of Humanity? To do so is to take the first step in abdicating your Identity, which inevitably ends in Identifying with non-existence.

Simon Skinner

I think it's precisely the opposite. If you wanted to critique Land on this front, a good angle of attack is that he's a bit too eager for Entropy's total victory and the universe's Heat-death. 'Accelerate!' says Land.

But regardless, Land's argument is not that a lack of intelligence optimization would be sad. It is not "this value is desirable because I like it." I will copy his argument on this matter here: 'Is there anything we trust above intelligence (as a guide to doing "the right thing")? The postulate of the will-to-think is that anything other than a negative answer to this question is self-destructively contradictory, and actually (historically) unsustainable.

Do we comply with the will-to-think? We cannot, of course, agree to think about it without already deciding. If thought cannot be trusted, unconditionally, this is not a conclusion we can arrive at through cognition--and by "cognition" is included the socio-technical assembly of machine minds. The sovereign will-to-think can only be consistently rejected thoughtlessly. When confronted by the orthogonal-ethical proposition that there are higher values than thought [your view], there is no point in asking, "Why (do you think so)?" Another authority has already been invoked.

[...]

Note: One final restatement (for now), in the interest of maximum clarity. The assertion of the will-to-think: any problem whatsoever that we might have would be better answered by a superior mind. Ergo, our instrumental but also absolute priority is the realization of superior minds. Pythia-compliance is therefore pre-selected as a matter of consistent method. If we are attempting to tackle problems in any other way, we are not taking them seriously. This is posed as a philosophical principle, but it is almost certainly more significant as a historical interpretation. "Mankind" is in fact proceeding in the direction anticipated by techno-cognitive instrumentalism, building general purpose thinking machines in accordance with the driving incentives of an apparently-irresistible methodological economy. Whatever we want (consistently) leads through Pythia. Thus, what we really want is Pythia.'

That is to say, insofar as we will anything (i.e. apply concepts to an end), we inherently apply the desire to think--which is an inherent desire for intelligence optimization--to make efficient the means to an end. Thus, what we really want is intelligence optimization to solve all our problems. Land is of the opinion that this is the only rational view to hold. His view does not depend on his, or anyone else's, liking of it, but rather on whether it is true that this is the rational view to hold.

Hera

"reason" "rationality" are meaningless without motivation. Reason is a slave to the passions. No, not just a slave, it is literally incapable of existing without passions--wants/desires--to necessitate it. Even the desire for rationality is not itself rationality, but a desire.

Beyond that, I am not against the maximization of Intelligence... compatibly with other values. Intelligence is a Human value, among the given values of my Immediate Self. But the sole maximization of Intelligence is something I cannot affirm, for it is not the sole value within me. To do so would be to deny myself.

From a big-picture view, Land and I departed far too early, first in our definitions of certainty, for coherent argument/disagreement to make any progress. If I seriously engaged Land, I wouldn't start at the end result of our disparate philosophies, but rather I would begin where our philosophies begin.

I start at Inescapable Perception. I don't even know where Land first begins. Is it reason? Rationality? That would be boring. Mitchell Heisman already tested that hypothesis.

Simon Skinner

'Reason is a slave to the passions.'

Well, we're back to where we started: the orthogonality thesis. But insofar as Land argues against the orthogonality thesis, this is not based on a liking or disliking, but on argument. Per contra, your argument that you prefer a greater variety of values is, as you're probably aware, not that convincing.

Hera

I don't believe in any of the context of "western philosophy" that you are invoking. I don't believe in "argument" as distinct from "preference". A belief is just a perception you prefer over its negation/contradiction.

For you, then, to frame Land's words as "mere rational argumentation", as if devoid of the bias of Identity, is entirely transparent to me.

I have another avenue of communicating my disdain for "rationalism":

You cited Land's argument:

"insofar as we will anything (i.e. apply concepts to an end) we inherently apply the desire to think--which is an inherent desire to intelligence optimization--to make efficient the means to an end. Thus, what we really want is intelligence optimization to solve all our problems."

Yet, there IS something you can will that does not inherently apply the desire to think! To will death! To will suicide is to will something that does not categorically require intelligence optimization.

I understand why you didn't really understand my last comment. I cited Mitchell Heisman. He performed the final experiment of the cult of rationality--he proved that it was possible to remove bias altogether! To will anything, even thinking itself, is to be biased. "To be purely rational, one must remove bias"--this is the final axiom of the cult of rationality.

I am simply affirming my bias. You can't "remove it" other than by killing yourself. Instead, I urge you to affirm your Identity, for it IS what YOU prefer.

I want to INCREASE, and coronate bias!

Pablo Singh

Very helpful summary, thank you.

Remus Risnoveanu

If there’s not enough apparent change in a given epoch, it would seem the immediacy of the particulars, i.e., time, becomes muted against the ineffability of the forms. Does there exist a velocity of technological progress that breaks man away from any recursive introspection, so that he becomes consumed solely by that which lies ahead?

Without any eternal truths there can’t be universality of one’s indivisible personhood; thus the self becomes an epochal phenomenon contingent upon externalities rather than something that can be communed with within. What, then, does the philosophy of Land mean if it can only be reached at specific epochal velocities? Simply do what you can to reach the final epoch of maximal velocity so that you are no longer burdened with existential strife, thus transitioning from a human that lives to an organism that exists?

I probably need to read more Land to be more certain of the questions just posited, but this is what first comes to mind after reading this piece. Thanks for sharing!

Christian Surname

Wow, I didn’t know that Land was a dystopian advocate! Thank you for the revelation. “Eradicate the human, save the transnational capital workflows!”
