

The Character of Consciousness (Philosophy of Mind) (original 2010; 2010 edition)

by David J. Chalmers (Author)

Members: 107 · Reviews: 1 · Popularity: 254,113 · Average rating: 3.86 · Conversations: None
What is consciousness? How does the subjective character of consciousness fit into an objective world? How can there be a science of consciousness? In this sequel to his groundbreaking and controversial The Conscious Mind, David Chalmers develops a unified framework that addresses these questions and many others. Starting with a statement of the "hard problem" of consciousness, Chalmers builds a positive framework for the science of consciousness and a nonreductive vision of the metaphysics of consciousness. He replies to many critics of The Conscious Mind, and then develops a positive theor… (more)
Member: eriksays
Title: The Character of Consciousness (Philosophy of Mind)
Authors: David J. Chalmers (Author)
Information: Oxford University Press, USA (2010), Edition: 1, 624 pages
Collections: Your library
Rating:
Tags: None

Work Information

The Character of Consciousness by David J. Chalmers (2010)


There are no conversations on Talk about this book yet.

(Original Review, 2010-10-30)

Is the assumption that brains are "just magic" - unlike kidneys or spleens or bones - correct? This elevation of "consciousness" to an almost dualistic status is irritating beyond belief, and seems to stem (pardon the pun) from the fact that brains are hellishly complicated and difficult to measure (difficult, but becoming easier).

Philosophers have proven USELESS at answering questions, but particularly useFUL at asking the wrong ones. We never did get a straight answer as to how many angels could dance on the point of a needle (or the head of a pin, depending on your source; it matters not). If I have learnt anything from my experience as a scientist, it is that sometimes, if you ask a stupid question, you get a stupid answer, and so continuing to ask the stupid question in the hope that the answer will become sensible is not very bright. "What is it like to be a bat?" Hmm, not sure. What is it like to be another human being? Since our brains - and, more to the point, our entire nervous systems - wire themselves uniquely, it would be hard to tell. This is the scientific equivalent of Bilbo's challenge to Gollum in The Hobbit: "What have I got in my pocket?" It's a stupid question, no matter how interesting the answer might be. Actually, since there are some blind humans who have learnt to echo-locate, the question may not be entirely out of reach. [2018 EDIT: And when, inevitably, technology brings us "Google Sonic Glasses" that connect directly to the brain, we can partly answer the question.]

Our brains are built to simulate an approximation of the world, because being able to predict the world makes our survival more likely. It stands to reason that if we have a visual sense to detect objects, then part of that simulation will be what we refer to as sight, and if it updates in near real-time then it will immediately become "an experience". Add to that mix the multiple streams of information being centrally routed, and an algorithm to pick the important ones to respond to - thus leading to an ever-shifting spotlight of attention - and we understand broadly why we experience what we do and how. “The Hard Problem” is just another name for dualism or animism or vitalism, or what I scathingly refer to as "Magic Pixies": a desire to make humans supernatural, rather than see us as what we are: complex, adaptive, resourceful.

There are good evolutionary reasons why sensation would be referred to a point, a locus of interaction with the world. There are good reasons for extrapolating behaviour into the future, rather than simply reacting to sensation. I would not be surprised if the interaction of sensation and extrapolation, memory, reflex and learning coalesced into a sense of self: it is important to recognise the difference between self and non-self, and we know that the distinction can be impaired in illness and in illusions. There isn't one hard problem; there is consciousness emerging from individually soluble neurophysiological problems.

I suspect the question of why we're not "just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life?" should be turned around. Man-made computers are becoming more sophisticated all the time, and it is probably only a matter of time before computers/robots can think and feel like us, or, indeed, in ways vastly superior to us. This theme is already completely out there (and has been for decades) in the world of science fiction.

We are clearly still a long way from answering all the "easy questions" (a few of which are cited in Chalmers's book) that are pertinent to the human brain, and I don't know how hard it would be to make a computer that modelled the thought processes of a human brain (perhaps partly because current computers use basic mechanisms such as logic gates, which have somewhat different physical properties from those of neurons, synapses, etc.). However, if these two things could be done (and they can both be classed as "easy questions" in Chalmers's terms), we would, I am sure, have made a conscious machine resembling a human brain, and the so-called "hard question" of the basis of consciousness would simply disappear. Artificial consciousness simply depends on a level of complexity which man-made computers have yet to reach. Consciousness is surely dependent on biological entities for its origin, but not necessarily for its continuation. I know this Singularity stuff is quite hip and popular in some circles, but it strikes me as complete nonsense. Computers don't feel. Current "AI" can accomplish some tasks that are really easy for humans, but it doesn't do them in the same way as us, and even where it "learns" it simply runs a series of calculations. Even if future AI could seem to us to be conscious, it would still be a simulacrum, just a really good one. It won't be alive and it won't be self-aware. I think the whole concept of The Singularity is based upon the premise that sufficiently complex technology is indistinguishable from magic to most people. But that is a failure of individuals to grasp its complexity, not a sign that there is really "magic" going on…
  antao | Dec 21, 2018 |

Belongs to Publisher Series

Canonical title
Original title
Alternative titles
Original publication date
People/Characters
Important places
Important events
Related movies
Epigraph
Dedication
First words
INTRODUCTION

What is consciousness? How can it be explained? Can there be a science of consciousness? What is the neural basis of consciousness? What is the place of consciousness in nature? Is consciousness physical or nonphysical? How do we know about consciousness? How do we think about consciousness? What are the contents of consciousness? How does consciousness relate to the external world? What is the unity of consciousness?
Part I: THE PROBLEMS OF CONSCIOUSNESS

Consciousness poses the most baffling problems in the science of the mind. There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain.
Quotations
Last words
Disambiguation notice
Publisher's editors
Blurbers (usually found on the back cover of the book)
Original language
Canonical DDC/MDS
Canonical LCC

References to this work in external resources.

English Wikipedia (1)


No library descriptions found.

Book description
Haiku summary

Current Discussions

None

Popular covers

Quick Links

Rating

Average: (3.86)
Distribution: 3 stars: 3 · 4 stars: 2 · 5 stars: 2
