
London Futurists Message Board › General Discussion Forum › Lets sort it out once and for all - What da fak is dis singularity??

Lets sort it out once and for all - What da fak is dis singularity??

Dirk B.
user 9941666
London, GB
Post #: 244
Taking your last point first: $4.5 trillion - or to put that in perspective, some 50% more than the Iraq War is projected to cost the USA. So I would say there has likely been a return on investment of that scale in IT over 70 years. But then again, how would you measure such a return on investment, given that only the simplest of enterprises could function these days without computers and the Net?

"Nanotechnology is also failing in its promise and is pretty slow in progress and genetics which is also moving pretty fast is certainly not doubling in capabilitys equal to moores law - ..."­
"Sequencing the first human genome cost $3 billion--and it wasn't actually the genome of a single individual but a composite "everyman" assembled from the DNA of several volunteers. If personalized medicine is to reach its full potential, doctors and researchers need the ability to study an individual's genome without spending an astronomical sum. Fortunately, sequencing costs have plummeted in the last few years, and now the race is on to see who can deliver the first $1,000 genome--cheap enough to put the cost of sequencing all of an individual's DNA on a par with many routine medical tests. "

The tendency of prediction is to overhype the short term and underhype the long. So the Human Genome Project was initially overhyped. However, to illustrate the progress being made, consider this recent report:

"Small conditional RNAs selectively kill cancer cells. In lab-grown human brain, prostate and bone cancer cells, small conditional RNAs (light and dark blue) bind to a targeted RNA cancer mutation (orange and green), triggering self-assembly of a long double-stranded RNA polymer that activates an innate immune response (gray turns to red) leading to cell death. No measurable reduction in numbers is observed for cells lacking targeted cancer mutations. Image courtesy of Suvir Venkataraman, William M. Clemons, Jr. and Niles A. Pierce (Caltech)
But what if we had cancer treatments that worked more like a computer program, which can perform actions based on conditional statements? Then, a treatment would kill a cell if --and only if-- the cell had been diagnosed with a mutation. Only the defective cells would be destroyed, virtually eliminating unwanted side effects.
With support from the National Science Foundation (NSF), researchers at the California Institute of Technology have created conditional small RNA molecules to perform this task. Their strategy uses characteristics that are built into our DNA and RNA to separate the diagnosis and treatment steps.
The molecules are able to detect a mutation within a cancer cell, and then change conformation to activate a therapeutic response in the cancer cell, while remaining inactive in cells that lack the cancer mutation," claims Niles Pierce, co-author of a recent study which appears in the September 6 issue of Proceedings of the National Academy of Sciences (PNAS). "

There are similar reports of nanoparticles coated with antigens that lock onto cancer cells and then deliver a toxin. As for nanotech generally, there is a huge amount of work being reported on self-assembling systems for use in future electronics, mostly involving carbon. Applications are, of course, a few years away.

I would say that both biotech and nanotech are following an exponential curve, but we are at the beginning of the slope, not the middle. With both, we are about where we were with electronics around 1970. If so, and they follow a similar trajectory, the big stuff ought to start changing the world in 20-30 years.

A former member
Post #: 514
I could agree with that, Dirk - what you say about biotech and nanotech actually being at the bottom of their curve of fast innovation. Let's use the early history of nanotech as an example:

1959 - Feynman gives after-dinner talk describing molecular machines building with atomic precision (concept)
1974 - Taniguchi uses term "nano-technology" in paper on ion-sputter machining
1977 - Drexler originates molecular nanotechnology concepts at MIT
1981 - First technical paper on molecular engineering to build with atomic precision
1981 - STM invented
1985 - Buckyball discovered
1986 - First book (Engines of Creation) published
1986 - AFM invented
1986 - First organization (the Foresight Institute) formed

I would say it's pretty accurate to say that nanotech is still very much in its infancy - it's only been around 25 years, in a sense a baby technology compared to a granddaddy like computing, which is coming up to 200 years old. The bold claims made by the nanotechnologists (and their enthusiasts) remind me of the similarly bold claims made by the computer scientists in the late 1940s and 50s, and by the radio scientists in the generation before that. There is no doubt that nanotechnology is going to get better - the uncertainty is how many of the promises can be fulfilled. Some nanotech projects, like making nanobots with almost "magical" powers, may simply prove impossible, and we may have to make do with washing powders and extra-strong tennis racquets. Even my dream of the beanstalk to space may not be possible - the 2010 tether competition this year was a massive disappointment.
One item of note: after the third or fourth tether entry was tested, Yuri Artsutanov (who was there to witness the competition) offered his tie as an entry…
:( - we can't even make a tether that doesn't snap yet.

Back to computers - I found this: Richard Dawkins goes into this in more detail here: http://www.accelerati... (Video)
He also says some rational things about "godlike machines" and the possibility of machines evolving intelligence from an evolutionary biology POV.

The war connection you brought up is also very interesting - it was mainly military contracts which funded microchip manufacture in the early days; it's estimated that 40% of Silicon Valley's sales were military in the 1950s and 1960s. In fact, if it hadn't been for the Second World War, digital computers might not have been developed then at all - perhaps if there hadn't been such a big need to break the German war machine's radar capabilities and secret codes, we might still be on a completely different path of technology. See this video - it's a bit long (63 mins) but incredibly interesting:

I don't buy that only the simplest of industries can function without the internet and computers - I think that's a lopsided view, again based around marketing rather than actuality. My industry (catering) barely uses the internet at all; I only use my laptop at work to print menus and such. But that's just me (being an enthusiast for technology) - I could just as easily not use one at all and handwrite everything; it's not essential. I would say most catering companies don't really use computers much, and often they are just there to supply WiFi access to customers - the catering staff have no use for them and often complain that using them is a hindrance on their time; you can't run a kitchen on a spreadsheet. Some catering organisations which are what you'd call "automated" have the problem that most people don't want to eat mass-produced fast food all the time - in fact most people believe that doing so will send you to an early grave! The Wetherspoons pub group is a good example - OK, they do fantastically cheap beer and food, but you know the quality is rubbish compared to an independently run pub which prides itself on fresh food and quality beers.
There are lots of other industries (OK, admittedly mainly traditional ones) which thrived before and after the advent of computing and the internet. It's a bit like watch manufacture - a digital watch keeps much better time than a mechanical one, but people still buy expensive clockwork watches in preference.

Cool thread - keep it going. This is the best one we've had for a while, and I'm on holiday so I can actually reply to posts at a decent time of day!
Dirk B.
user 9941666
London, GB
Post #: 246
Various points, in no order.

Industries needing computers - anything that requires typing, calculation, accounts, or tax returns beyond a single sheet of paper. You are probably too young, but when I was a kid every biggish company had a "typing pool" filled with (mostly) women who would... type. Of course, you could rehire them, but at a guess payroll costs would probably jump by at least 25%.

WW2 - You are probably right. Without that and the Cold War, the computing industry would likely only now be at the Sinclair Spectrum level, i.e. around 1980. We see the same effect in spaceflight, but that has now (IMO) been "corrected" historically, i.e. we are looking at commercial space travel, a real space station and a moon landing around 2020.

BTW - I really like Wetherspoons! Burger and pint for under a fiver!

I would rate the start of nanotech at 1986 with Drexler's book, Engines of Creation.
However, much of "real" nanotech is dependent on semiconductor fabrication machinery circa 1980 and beyond. As for biotech, a lot of that is still computer limited eg protein folding simulation stuff etc.
A former member
Post #: 517
Enjoy your burger - personally I think companies like Wetherspoons are killing the pub trade with uncompetitive practices, and the wages they pay are the worst - most cooking staff get about £6 an hour - but that's a subject for another forum...

Back to the singularity - here's an argument from Bob Seidensticker, who debunks the singularity and exponential technology growth in his book Future Hype:

The further backward you look,
the further forward you can see.
-- Winston Churchill

The game of chess dates back to India 1400 years ago. Legend says that the local ruler was so delighted by the game that he offered its inventor the reward of his choice. The inventor's request was defined by the game board itself: a single grain of rice for the first chess square, two for the next, four for the next and so on, doubling with each square through all 64. Unaccustomed to this kind of sequence, the ruler granted this seemingly trivial request. But though the initial numbers are small, the amount builds quickly. The rice begins to be measured in cups by square 14, sacks by square 20, and tons by square 26. The total comes to about 300 billion tons--more rice than has been harvested in the history of humanity.
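The chessboard arithmetic is easy to check. A minimal sketch (the ~16 mg per-grain mass is my assumption, picked from within the range of real rice grains; heavier grains give correspondingly larger tonnages):

```python
# One grain on the first square, doubling on each of the 64 squares.
total_grains = sum(2**square for square in range(64))  # equals 2**64 - 1

# Convert to mass, assuming ~16 mg per grain of rice.
GRAMS_PER_GRAIN = 0.016
total_tonnes = total_grains * GRAMS_PER_GRAIN / 1e6  # grams -> tonnes

print(f"{total_grains:,} grains")                      # 18,446,744,073,709,551,615
print(f"{total_tonnes / 1e9:.0f} billion tonnes")      # about 295 billion tonnes
```

That lands right around the "about 300 billion tons" Seidensticker quotes; the exact figure depends only on what you assume a grain weighs.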

Like the king in the chess story, most of us are inexperienced with repeated doublings such as these. Let's look at a present day example. In 1971, Intel introduced the 4004, its first microprocessor, with a performance of 0.06 MIPS (million instructions per second). Intel's Pentium Pro was introduced in 1995 with 300 MIPS, a 5000-fold performance increase in 24 years--about one doubling every two years. A car making the same speed increase would now have a top speed of about Mach 700. Give it another 24 years at the same rate of increase, and its top speed would exceed the speed of light.
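The doubling period implied by those two endpoints falls straight out of the numbers (a sketch of the arithmetic, nothing more):

```python
import math

# Quoted endpoints: Intel 4004 at 0.06 MIPS (1971), Pentium Pro at 300 MIPS (1995).
mips_start, mips_end = 0.06, 300.0
years = 1995 - 1971

doublings = math.log2(mips_end / mips_start)  # ~12.3 doublings in 24 years
doubling_period = years / doublings           # ~1.95 years per doubling

print(f"{doublings:.1f} doublings, one every {doubling_period:.2f} years")
```

So the 5000-fold jump really does work out to almost exactly one doubling every two years, as the excerpt says.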

Moore's Law, named after Intel cofounder Gordon Moore, predicts this exponential rise in computer performance. Every two years, microprocessor speed doubles. Again. This law has been startlingly accurate for three decades, unlike many other extrapolations, and the progress it predicts is expected to continue, at least for the near future.

There is no precedent for this rapid performance improvement, which is why we view computers and their rapid change with wonder.

My own career of 25 years as an engineer has been tied to the effects of Moore's Law, both as a digital hardware designer and as a programmer and software architect. Ever since high school in the 1970s, I've been immersed in computer technology and have been an energetic cheerleader of technology in general. I was in awe of the change computers caused and was delighted to be a small part of that change. Change was exciting. And it was all around us--I grew up with the space program and jumbo jets, nuclear power and skyscrapers, Future Shock and Megatrends. Exponential change seemed to be everywhere we looked.

To make sure we're all clear what exponential change looks like, this chart contrasts no change and linear change with exponential change. The vertical axis is unlabeled--it could represent transistors if we're measuring microprocessors, dollars for compound interest, the number of bacteria for growth in a Petri dish, or the grains of rice in the chess story. While they may start out slowly, exponential curves eventually snowball.

As I gained experience, I came to realize that change for its own sake wasn't as fun or desirable for the software user as imagined by the software developer. Curiously, users wanted new software to answer to bottom-line needs. Who would have guessed? Coolness alone wasn't enough--users demanded that software pull its weight, as they would for any other purchase.

They were right, of course. New software must provide sufficient additional benefits to outweigh the cost and aggravation of adopting it. This is also true for other consumer products. The consumer might think: I like that digital camera, but it uses a new type of memory card. Will it become a standard or an unsupported dead end, like so many other products? Should I make MP3 copies of my favorite songs or keep them on CD? Is HDTV really here, or is the current hype another false alarm? In general, is the latest hot product something that will last, or is it just a fad? The early adopters are quick to make this leap, but the chasm must be narrowed considerably for the majority of us. Change for its own sake wasn't as delightful as I'd thought, and I came to see things more from the user's perspective.

A former member
Post #: 518

The high failure rate of new products challenges the inevitability of exponential change. A bigger challenge came as I studied high tech products from the past, looking for precedents against which to compare my own projects. Why were these old products successful, and how could I adopt their lessons for my own work? But something unexpected happened. As I learned more about the history of technology, examples emerged that the exponential model could not explain. I gradually realized that there was a different way--a more accurate way--to look at technology change.

The exponential model as a universal explanation for and predictor of technology change is at best an approximation and at worst a delusion. We can support it only by selecting just the right examples and ignoring the rest. Technology does not always continuously improve. For example, commercial airplane speeds increased steadily but halted when airlines realized that expensive supersonic travel didn't make business sense. Highway speed limits increased steadily but also hit a ceiling. Record building heights increased rapidly during the first third of the twentieth century but have increased only moderately since then. Use of nuclear power has peaked, and manned space exploration halted after we reached the moon.

Different areas of technology advance at different rates and come to the fore at different times. Cathedral building emerged during the 1200s while other technologies languished. Printing created dramatic change in the late 1400s. It surged again in the early 1800s as mechanized presses provided cheap books and magazines. Steam power and mills had their heyday; later, it was electricity and electrical devices. There are dozens of examples of a technology surging forward and then maturing and fading back into the commonplace.

Perhaps the most venerable use of the exponential model has been to represent world population growth. But even here it's an imperfect metaphor. In the 1960s and '70s, experts warned that world population was growing exponentially and the crowding was quickly getting worse. Famine was just around the corner. The exponential model was a dramatic way to make a point but an inaccurate one. World population growth is slowing and is expected to peak mid-century, and dozens of countries are already falling in population (ignoring immigration).
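The distinction Seidensticker is drawing - growth that looks exponential early on but flattens toward a ceiling - is the classic logistic curve. A toy sketch (the parameters are purely illustrative, not a demographic model):

```python
def logistic_growth(p0: float, rate: float, capacity: float, steps: int) -> list[float]:
    """Euler steps of the logistic equation dP/dt = rate * P * (1 - P / capacity)."""
    trajectory = [p0]
    p = p0
    for _ in range(steps):
        p += rate * p * (1 - p / capacity)
        trajectory.append(p)
    return trajectory

traj = logistic_growth(p0=1.0, rate=0.5, capacity=100.0, steps=40)
# The early steps grow almost exponentially (~1.5x per step); the later
# steps flatten out just below the carrying capacity of 100.
```

Sampled early, the two models are nearly indistinguishable, which is exactly why extrapolating an exponential from early data can mislead.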

Despite the common perception, the impact of technology on society today is comparatively gentle. To see a truly serious example of the collision of technology and society, look at Britain during the Industrial Revolution almost two centuries ago. In 1811, armed gangs of Luddites smashed the textile machines that displaced their handmade crafts. Several years and over 10,000 men were required to put down the rebellion. The unrest spread to the Continent, where the word "sabotage" was coined. It comes from the French word sabot, the name for the wooden shoes used by workers to smash or jam machines. In the space of a generation, independent work on farms had given way to long six-day weeks in noisy and dangerous factories. Our own technology growing pains seem minor by comparison.

It's easy to focus on the recent at the expense of the old. But doing so leads to a distorted view of modern technology. New products loom disproportionately large in our minds, simply because they're new. The image of Americans a few generations ago living quiet, static lives is fiction. Societies of the past dealt with disruptions from technology every bit as challenging and exciting as our own: the telegraph and electricity, the car and railroad, anesthesia and vaccines, concrete and steel, newspapers and mail. And then we have the fundamental developments of antiquity on which society is based: agriculture, metallurgy, the beginnings of engineering, writing, textiles, transportation, timekeeping, basic tools and weapons, and so on. Are today's products really so amazing compared to those on which they were built? Too often we mistake a new technology for an important one.

Part of the problem is a narrow definition of technology. Obviously, the Internet, computer, and cell phone fit into this category. These are in the news and in our awareness. But this book will use a very broad definition of technology, including these new technologies as well as older and less glamorous ones mentioned above. Metallurgy, textiles, and all the rest were high tech at one point, they are still important to society, and examples from these older technologies will be liberally used in this book to illustrate that today's issues with technology have been around for a long time.

The prevailing view of reality is often an oversimplification. For example, a simple rule often taught to small children is "All ocean creatures are fish." Though incomplete, it's a step in the right direction. When the children are a little older, we might teach them, "All ocean creatures are fish--except whales and dolphins." When they are older still, we teach them "All ocean creatures are fish except marine mammals (like whales and dolphins), crustaceans (like crabs and lobsters), bivalves (like oysters and scallops), cephalopods (like nautilus and squid), ..." and so on.

We frequently hear that today's technology change is unprecedented. But like the fish simplification for children, this tells far less than the whole story. The characterization that today's change is unprecedented is easy to understand and helps explain some of what we see, but it's inaccurate--and dangerously so. You have outgrown the children's version and are ready for a grown-up look at technology.

more on this point of view here: http://www.future-hyp...
Dirk B.
user 9941666
London, GB
Post #: 250
All of that is saying nothing that has not been mentioned elsewhere in the thread.
Exponential growth limits are almost entirely determined by how far we are from theoretical limits, coupled with our resources.
With cars that is around a factor of 5; with computers, a factor of (at least) 100 million.
Then there is the additional factor that exponential growth in information processing potentially has far greater impact than any other tech. Information is the third universal "element" after matter and energy.
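Dirk's factors translate directly into doubling headroom (a back-of-envelope sketch, assuming a fixed two-year doubling period, which is itself the contested premise):

```python
import math

def years_of_headroom(limit_factor: float, doubling_years: float = 2.0) -> float:
    """Years of exponential growth left before hitting the theoretical limit."""
    return math.log2(limit_factor) * doubling_years

print(f"cars:      {years_of_headroom(5):.1f} years")    # ~4.6 years
print(f"computers: {years_of_headroom(1e8):.1f} years")  # ~53 years
```

On those assumptions, a factor of 5 is exhausted in under five years, while a factor of 100 million sustains half a century of doublings.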
A former member
Post #: 521
Aye, it was nice to have come to similar conclusions as somebody who has also looked at technology historically - I still haven't read his (Bob Seidensticker's) book (bloody Amazon). I just decided to look at technology as a historian might, as he suggested in this lecture: http://research.micro...

Anyway, to go back to the point - it's interesting that no-one (not even its enthusiasts) can really agree on what the nature of the Technological Singularity even is.

1. We know it can't be that technology simply gets faster and faster, because that isn't true - just a few days of researching at the library shows that idea is largely marketing hype.
2. Nanotech is very much in its infancy, and making bold claims about what it may or may not be capable of is more hype - some proposed ideas may not even be possible (damn you, laws of physics - the ultimate bubble-burster).
3. We are going through an internet revolution, but it still hasn't given us anything really new - it has just made doing certain things a lot faster, and given idiots like me a platform to shout their crazy ideas which they would not have had before.
4. The idea that we can build an AI that is smarter than us, which can then go on to build its own upgrade, and so on - this seems the most plausible, but we have been listening to the promise of AI for over 60 years and it's still just around the corner...
5. Computer processing power does not double every 18 months, but the number of transistors on a chip does. Processing power does not measure intelligence - it's just speed of calculation - and there are some problems that can't be solved by brute-force calculation; sometimes you just can't build the software.
6. Humans improving by incorporating technology is again a very long way off - we can barely make replacement body parts, let alone devices which are an improvement on what we are born with. Cochlear implants are a good example: contrary to popular belief you don't simply "hear", you have to learn to interpret the signals which the implant gives, and a lot of Deaf people remove them even if they were implanted at a young age. It's a big mistake to think that a disability makes somebody less than human, and I think the same goes for hypothesised improved humans thinking they are more than human.
7. Intelligence explosion - again, this is subjective: how do you measure intelligence? Is intelligence just being able to manipulate technology? Is Wayne Rooney a genius because he is an expert football player? When we have a definition of what intelligence is, perhaps we can understand how it may be improved.
8. Humans get scanned into a computer - this is weird; it's just like thinking that humans have a soul and that it can be digitised (neither of which I buy). Even if it were possible to make a perfect digital representation of you, it would not be you, so why would you choose to let your digital representation be you and live your life in virtual reality? It's like deciding that you want to die, or to hide inside your technology. That seems a very strange choice, and in a way quite sad - like giving up on existence.

Dirk B.
user 9941666
London, GB
Post #: 252
I think the critical element of the singularity is amplified intelligence creating "things" (objects, technology, processes etc) that cannot be understood by even the brightest Human in existence today. The simplest example would be exascale invention machines running genetic algorithms to produce new technology and scientific theories (along with tests and predictions).
A former member
Post #: 528
Perhaps - perhaps not -

Dirk - I'm taking a rest. David made a good point on another thread that I am just being contemptuously negative about a lot of things at the moment. At first I rose to it, but as much as I hate him ;) he is right, and he knows me well enough to judge that I am getting cranky, and that is a path I don't want to go down. I think I am going to concentrate on my cryonics whistleblowing for a while and come back later, when I can be more objective and rational. Not everything about futurism is negative, a marketing ploy or a cryonics scam, and I could do with a break so I remember that.

Cya soon, pal - sorry if I've been a bit OTT, but you know what I'm like - bloody chefs are all the same; being OTT is how we survive!
London, GB
Post #: 219
When Richie notes that others have said:
"the amount of information is doubling every few years and increasing at a far greater rate than has ever been previously imaginable" or "there are more scientists and technologists alive and working than in all the previous history of civilisation".

This is completely true, but what the original claimants failed to tell him is that this is also a truism, because the population is expanding. Put very simply, with a larger population we're bound to have more scientists and technologists... but we'll also have more toilet cleaners, shop assistants and musicians - and this doesn't mean saxophones are suddenly going to become self-aware, any more than computers will.

On a slightly more complicated note, we've got more technologists simply because that career didn't exist more than a couple of decades back, so chances are that every technologist who ever lived is still alive, and as more people declare themselves to be technologists, that number will only increase... big deal! It's much the same with scientists: yes, they've been around much longer, but look back a little further than a century and the typical scientist was the classic gentleman scholar - someone who came from a wealthy family and tinkered with his experiments as a hobby. That meant that as well as being a little bit clever, he also had to fund his own research... difficult at the best of times. Nowadays, however, opportunities to train in a scientific field abound, there are government and private grants available, and it's a lot easier to be a full-time scientist - so as a result, as well as there being more scientists in absolute terms, there are also many more of them per head of population.

No doubt this will all fly over the heads of the transhumanists, who prefer to base their version of a singularity on some random science-fiction book when, if they knew anything about English literature, they would be aware of an earlier book, "This Perfect Day" by Ira Levin, which also predicted a computer-controlled future... but then who wants predictions about the future from the same person who wrote Rosemary's Baby?
