To a large extent, what I’m writing about below is an ooh-shiny, which is, after all, what the eponymous Magpie is all about. But I believe the foundation of the ideas outlined herein to be sound.
There are a lot of things to keep an eye on right now: climate change, overpopulation, the end of growth, the end of oil, the end of jobs, the runaway effects of inequality, the rise of social injustice warriors, the corruption of democracy.
None of that is where we need to aim right now. Far and away the most important thing anyone can work on at the moment is user interface.
Not the mouse and keyboard, mind you. Not the recently ubiquitous touchscreen. Not even the obvious steps forward in three-dimensional interfaces. Maybe those matter, maybe they don’t, but they’re not where we need to aim anymore.
The user interface that matters right now is the one that allows human beings to expand their individual processing capacity via artificial means. There are two pieces to that.
The first is technical: how do we do this, or more correctly how much material support and resources do we dedicate to finding the solutions?
The second is economic and ethical: who gets the goods first, and how much do we restrict the addition of resources to individuals? This question is easier to answer on an individual level, though much harder to reach agreement on collectively.
I would argue that there is no upper limit on how much we should spend to solve the technical problem, though it’s complicated. We may find bottlenecks in the direct pursuit of the goal that require research in prerequisite areas, and we may thus gain more from diverse spending across technology industries. And there are many different ways to approach progress in general: companies like Tesla, or even Local Motors, are credited with significant advances in specific areas. They have certainly demonstrated the incredible power already available to anyone who wants to make use of tools in a low-overhead, high-demand environment.
Moreover, there are limits to the benefits we can derive from direct investment. The costs of any such program could easily balloon without limit, and rather than reaching the goal we could end up funnelling money into undeserving pockets. This isn’t so much an upper limit, however, as a guide for how not to invest: such a program should be at its heart competitive, even capitalistic, in its orientation. There need to be numerous individual efforts towards the one goal, and not a single prize but rather a constellation of milestones arrayed in as many dimensions as we can determine. The point isn’t just to get to the moon this time; it’s to transform the way that consciousness exists in the world before it becomes entirely irrelevant.
The second question is difficult beyond reckoning, and might in fact require the use of the technology in question to fully appreciate, let alone resolve.
As far as I am concerned, at the core of this program needs to be an implicit understanding that Google’s or IBM’s or Microsoft’s or Wolfram’s artificial intelligence cannot be the primary beneficiary of advances in scalable consciousness; the beneficiaries must be us, humans. At the very least it must be us first and foremost, other extant conscious beings to whatever extent is possible and ethically sound, and any artificial consciousnesses as they arise from other research.
For myself, I look at those who are struggling for any number of reasons, and for whom the system is only going to fail more severely in days to come, and I think that we need, as soon as possible, to provide the ability to augment their capacity to equal or exceed those who occupy higher socioeconomic tiers. Fundamentally it’s a socialist proposition, but I don’t see any way around that, from an ethical standpoint. Any runaway system stops working once it can artificially produce or inflate capabilities and consumption. We’re already seeing that with wealth inequality. It’s got a long way to go yet before the dystopian future arrives, but there’s no real reason to believe it can be reversed without explicit corrective measures, including radical redistribution of resources on a scale that rivals the Stalinist epoch.
The problem is far more significant when talking about literal superhumanity. Once a Donald Trump or a Peter Thiel can exponentially augment themselves, there’s no hope for the rest of us. On a human timescale, things get out of hand too fast.
At the same time, it does not escape my attention that a sizeable portion of farming (and thus, the provision of necessities) in North America is done by people whose lifestyle has not changed significantly with the advent of technology. I think the days when that fact is relevant are numbered; still, we must appreciate that necessities may remain necessary, and that those who choose to deal in them may not take advantage of new technologies the way that some of us do.
In reality, any system that includes superhuman capabilities needs to have room for those who, for whatever reason, do not acquire them. Diversity of experience is a pillar of stability in equal-peer systems. That may hold true even when the equality is not preserved; certainly the ethical dimensions seem likely to remain intact. Indeed, as the rise of vat meat shows, ethics may take on new significance with advances in technology.
The only corrective there is to ensure that supports are available to those who most need them as early as possible, and to plan for steady growth in the allotment of capacity to individuals. This becomes hard at economic scales, but that’s really the problem that socialism always has to grapple with. In this case, I don’t think that individualistic models like capitalism hold enough value to ignore their serious deficits.
This is over a thousand words now, so I’m going to end with that thought. I’ll probably talk more about this in another post, maybe addressing the things that are already happening in the world at large that matter in this particular arena. In the meantime, I hope you’ll have a little look-see yourself.