The Bay Area Collective Intelligence Meetup Group Message Board › Human moves mouse to control realtime generated music which artificial intelligence (AI) changes to try to control Human's hand on mouse

Human moves mouse to control realtime generated music which artificial intelligence (AI) changes to try to control Human's hand on mouse. AI translates music problems to programming problems, organizing Humans unconsciously through the internet to build

A former member
Post #: 2
Audivolv is open-source (GNU GPL 2+), is 100% Java, contains only 30 kilobytes of very compact AI code, and is the only artificial intelligence that creates musical instruments you play with the mouse, learns how you want them to sound, and automatically creates and uses new software to make them sound more that way.

My plan, which will take many years, is to carefully build it into a Friendly AI and a Collective Intelligence made of millions or billions of people playing mouse-music across the internet. By using only mouse movements and computer speakers, I should be able to get some of the same effect as direct brain-to-brain communication, because music and dancing-like mouse movements are very close to people's unconscious minds.

Audivolv works immediately without installing, and sets up all options automatically.
If you are not evolving audio within 10 seconds of starting it, please write on the bug list.
It starts as white noise, like radio static; then you teach it what music is with your mouse.

To run it, simply run this file:
and if it does not work, try installing

What does Audivolv do now?

* Starts with 1 click, assuming you have Java.
* Sound reacts instantly to mouse movement.
* You move the mouse any way you like and think about how that should sound.
* You choose "Sounds Good" or "Sounds Bad".
* Audivolv tries to learn what you were thinking.
* Repeat until your musical thoughts become real.
* Close Audivolv and start over if you confuse it too much.
* Audivolv is Java software that writes new Java software to change parts of itself as it learns.
* Learns techno music faster than any other type, but what did you expect an AI to play first?
* There is not 1 line of music code in Audivolv until you teach it what music is. Then it writes the code.
* Find all new musical instruments in the "Audivolving" folder it creates; copy/paste what's in those files into the "Create Musical Instrument" tab in the options to play them again.
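The good/bad loop above can be sketched as a tiny hill-climbing evolution. This is a minimal sketch with invented names (`play`, `mutate`, the 3-gene instrument) — not Audivolv's actual code: an "instrument" maps mouse (x, y) to a speaker amplitude, and each "Sounds Good" keeps a mutation while "Sounds Bad" discards it. Here a stand-in fitness rule replaces the human.

```java
import java.util.Random;

/** Minimal sketch (all names hypothetical) of the good/bad evolve loop:
 *  an instrument maps mouse (x,y) to an amplitude in [-1,1], and
 *  "Sounds Good" keeps a mutation while "Sounds Bad" reverts it. */
public class EvolveSketch {
    static final Random RAND = new Random(42);

    /** A toy instrument: amplitude = a*x + b*y + c*x*y, clipped to [-1,1]. */
    static double play(double[] genes, double x, double y) {
        double amp = genes[0] * x + genes[1] * y + genes[2] * x * y;
        return Math.max(-1.0, Math.min(1.0, amp));
    }

    /** Mutate one randomly chosen gene by a small random step. */
    static double[] mutate(double[] genes) {
        double[] next = genes.clone();
        next[RAND.nextInt(next.length)] += (RAND.nextDouble() - 0.5) * 0.2;
        return next;
    }

    public static void main(String[] args) {
        double[] genes = {0.0, 0.0, 0.0};
        // Stand-in for the human judge: "good" = louder response at (0.5, 0.5).
        for (int step = 0; step < 200; step++) {
            double[] candidate = mutate(genes);
            boolean soundsGood = Math.abs(play(candidate, 0.5, 0.5))
                               > Math.abs(play(genes, 0.5, 0.5));
            if (soundsGood) genes = candidate; // keep the mutation
        }
        System.out.printf("evolved |amplitude| at (0.5,0.5): %.3f%n",
                          Math.abs(play(genes, 0.5, 0.5)));
    }
}
```

The real program evolves generated Java code rather than a fixed gene array, but the select-keep-discard shape of the loop is the same.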

What will Audivolv do years from now?

* The same thing it does now, but it learns faster and will learn what you like with no good/bad buttons.
* Play all types of music and create new types.
* Given examples of the best-known AI algorithms (Bayesian nets, neural nets, evolution, etc.) rewritten in Audivolv's simpler code and data format, Audivolv will evolve an intelligent way to design and use new AI algorithms in that same format.
* Audivolv evolves a way to intelligently design AI brainwaves and data structures it will flow through and modify.
* Audivolv designs a different music-and-mouse language between AI and each person who plays music with their mouse.
* A new kind of communication: Audivolv translates between those music-and-mouse languages (no text) between millions of people across the Internet.

THEORETICALLY, AND I MEAN VERY THEORETICALLY, what could Audivolv possibly do years from now?

* Experiment with, and design Intelligence Amplification and Coherent Extrapolated Volition algorithms.
* Having a complete path between the mouse, speakers, and psychology of each person using the "Audivolv Network", and limited only by the bandwidth of mouse movements and speakers, any 2 people should be able to communicate thoughts directly and more efficiently than by talking or typing on a keyboard.
* The first Artificial General Intelligence (AGI), still running only in Audivolv's code and data format, communicating to other Audivolvs on the Internet, but each Audivolv rejects all dangerous code it receives by using its code-string-firewall.
* It will look almost the same as Audivolv does today, and you will still use it by playing music with the mouse, but with advanced realtime psychology it will communicate 3d shapes and other information, similar to how you "hear the shape of a drum" ( http://en.wikipedia.o... a_drum ). Today we think searching videos by patterns of color (instead of text) is advanced. Instead of text, video, or sound, you would search the "Audivolv Network" for ideas. Just think your search query while playing mouse-music however feels right at the time.

IN EXTREMELY TECHNICAL WORDS, MY LONG-TERM THEORETICAL GOAL FOR EACH AUDIVOLV PER COMPUTER: efficiently unify connectionist AI (the obs[] part of an audivolv.Func) (any recursive permutations of linear and exponential types) with evolution (the obs[] part of an audivolv.Func) and hypercube vector fields (the flos[] part of an audivolv.Func, or any flo numbers recursively in permutations of linear and exponential connectionist array networks), represented in the same code and provably-predictable data format (arrays of arrays of arrays... allowing cycles and leaf nodes like audivolv.Func or numbers in hypercube range, with other constraints), code-string-firewalled for safety, so generated AI software can use other generated AI software as tools, starting from a few example AIs like Bayesian (a type of exponential connectionist AI) and evolution, to play better mouse-music (a hypercube vector field that includes 1 dimension for each speaker and for mouse x and y position).
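The "arrays of arrays allowing cycles, with leaf nodes like audivolv.Func or numbers in hypercube range" format can be sketched concretely. This is only an illustrative guess at the described constraints (the `Func` interface and `valid` checker here are invented, not Audivolv's real types): nodes are `Object[]` arrays that may point back into themselves, and every leaf must be a function or a number in [-1, 1].

```java
import java.util.IdentityHashMap;

/** Minimal sketch (types hypothetical) of the "arrays of arrays" data
 *  format described above: nodes are Object[] (cycles allowed), leaves
 *  are either a Func or a number in the hypercube range [-1,1]. */
public class FormatSketch {
    /** Stand-in for audivolv.Func: maps flo inputs to a flo output. */
    interface Func { double call(double x, double y); }

    /** True if every leaf is a Func or a double in [-1,1]. Cycles are
     *  tolerated by remembering arrays already visited (by identity). */
    static boolean valid(Object node, IdentityHashMap<Object, Boolean> seen) {
        if (node instanceof Func) return true;
        if (node instanceof Double) {
            double d = (Double) node;
            return d >= -1.0 && d <= 1.0;
        }
        if (node instanceof Object[]) {
            Object[] arr = (Object[]) node;
            if (seen.put(arr, Boolean.TRUE) != null) return true; // cycle: ok
            for (Object child : arr) {
                if (!valid(child, seen)) return false;
            }
            return true;
        }
        return false; // anything else violates the format
    }

    public static void main(String[] args) {
        Func sine = (x, y) -> Math.sin(x * Math.PI) * y;
        Object[] graph = {0.5, sine, null};
        graph[2] = graph; // a cycle back to the root, as the format allows
        System.out.println(valid(graph, new IdentityHashMap<>()));
    }
}
```

A checker like this is one way a code-string firewall could reject received data that steps outside the provably-predictable format.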

To an AI, mouse-music would look like an AI changing the data described above. To a person, mouse-music would sound like there's a person on the other end of the mouse helping with the music. A common language between people and AI, to use AI as a person emulator and people as an AI emulator. It's much easier to do both of those things together than only 1. Audivolv's long name is in quotes: "Audivolv - where Instruments Play The Musicians and Musicians Play The Instruments", a theory of recursive intelligence, where Human thoughts and AI thoughts could be used interchangeably in the same recursion, using mouse-music as "Human interface code" instead of normal "Human interface code" like windows, text, buttons, and menus. I know brains run much slower and more parallel than computers. There can be many speeds of recursion, some allowing more time to think at each step, some slow enough to flow across a global AI network, and others fast enough to process realtime audio. Recursive intelligence can be slow or fast, blurry or specific, and any combination of those, in theory.

There are some neural networks that use the timing and gradual activation levels of each node in the intelligence.

I'm planning something more advanced: instead of using it as "time" and taking a few "activation levels" each time a node fires, I'll process only a few neural nodes at a time and render the neural spiking as 44.1 kHz audio, the same sound quality you get on a CD. You could literally listen to the neural spiking of the nodes. By allowing roughly 1000 times more timing accuracy, it will be possible to evolve much more advanced neural-spiking patterns the same way it evolves music today.

The timing of neural spiking is very important, including the timing and wave interference as the electricity flows down a neuron's axon and branches into its 1000-10000 child neurons.
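Rendering spiking at audio rate can be sketched in a few lines. This is only an illustration of the idea, with invented parameters (the 2 ms exponential decay per spike is a stand-in, not a claim about real neurons or Audivolv's design): spike times land on exact 44.1 kHz sample indices, so every spike's timing is accurate to about 23 microseconds.

```java
/** Minimal sketch (parameters invented) of rendering neural spiking as
 *  44.1 kHz audio samples: each spike adds a brief exponentially
 *  decaying click, and its timing lands on an exact sample index. */
public class SpikeAudioSketch {
    static final int SAMPLE_RATE = 44100;

    /** Render one second of audio from a list of spike times (seconds). */
    static double[] render(double[] spikeTimes) {
        double[] samples = new double[SAMPLE_RATE];
        for (double t : spikeTimes) {
            int start = (int) Math.round(t * SAMPLE_RATE);
            // Each spike: a click decaying over ~2 ms (about 88 samples).
            for (int i = start; i < samples.length && i < start + 88; i++) {
                samples[i] += Math.exp(-(i - start) / 20.0);
            }
        }
        return samples;
    }

    public static void main(String[] args) {
        double[] audio = render(new double[] {0.10, 0.25, 0.50});
        System.out.printf("peak near t=0.25s: %.3f%n",
                          audio[(int) (0.25 * SAMPLE_RATE)]);
    }
}
```

Feeding such a buffer to the sound card (e.g. through `javax.sound.sampled.SourceDataLine`) is what would let you literally listen to the spiking.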

For more detailed design plans, see the "ToDo Summary.txt" file inside the newest Audivolv Jar file.