What we're about
Upcoming events (2)
Delving a bit deeper than part one on April 7: Can a machine have 'TRUE' agency, and if an intention is held, must there be ongoing awareness of that intention (or 'value')?

Obviously with more than a passing reference to Gilbert Ryle's famous concept of the Ghost in the Machine.

At our first discussion on intentionality we discussed possible mechanisms in the brain that could account for intentional brain states, and argued that, despite claims by Dennett, current artificial devices could never hold such states. Wanting to investigate further, we looked at claims made by Edelman's team that their Darwin series of brain-based devices hold 'values', at least. These values could perhaps ground claims of intentionality. The idea is to gain insight into whether a true (qualitative) value system is possible, even in principle, in an artificial device, and if so, by which principles of operation. Failing that, we can move away from Strong AI and look at how a 'wet machine' like a brain handles it.

Here is a 5-minute video of Edelman discussing the concept of 'value' in an artificial device: https://youtu.be/8Q016H1TSL4

And here is a short video of Jeffrey Krichmar (the contact we are working on) discussing an early 'Darwin' device and a fundamental artificial 'value': https://youtu.be/J7Uh9phc1Ow

When considering the possibility of agency in an artificial device, bear in mind the following entry from the Stanford Encyclopedia of Philosophy:

Finally, we turn briefly to the question of whether robots and other systems of artificial intelligence are capable of agency. If one presumes the standard theory, one faces the question of whether it is appropriate to attribute mental states to artificial systems (see section 2.4). If one takes an instrumentalist stance (Dennett 1987: Ch. 2), there is no obvious obstacle to the attribution of mental states and intentional agency to artificial systems.
According to realist positions, however, it is far from obvious whether or not this is justified, because it is far from obvious whether or not artificial systems have internal states that ground the ascription of representational mental states. If artificial systems are not capable of intentional agency, as construed by the standard theory, they may still be capable of some more basic kind of agency. According to Barandiaran et al. (2009), minimal agency does not require the possession of mental states. It requires, rather, the adaptive regulation of the agent's coupling with the environment and metabolic self-maintenance. This means, though, that on this view artificial systems are not even capable of minimal agency: "being specific about the requirements for agency has told us a lot about how much is still needed for the development of artificial forms of agency" (Barandiaran et al. 2009: 382).

More detail later.
This 'event' won't have a set date (not yet at least) but is a place to list some ideas of what members are really interested in. I will add a couple of my own deepish ideas, but do not want to dominate if others have common themes of interest. So just mention what you think would be cool.

One thing I am interested in is taking a concept from the philosophy of mind and seeing how it may relate to neuroscience research. In this domain a recurring item is Daniel Dennett's comments regarding 'illusion' and how they relate to 'the argument from illusion' used in the philosophy of mind against the view called 'naive realism'. We could discuss these issues and watch a short video from Dennett where he concludes that a generated pink image is 'just an illusion' (having proved that it is in fact 'there'). But then he stops short and says no more, leading many to say the title of his book should not be 'Consciousness Explained' but 'Consciousness Ignored'. Can we discuss possible ways of moving beyond the impasse he has created? I have a few putative ideas myself; what about you? So do say what you think about this, and list some other conundrums you think we could work on.