Topic: A skeptical view (no philosophical bullshit) of consciousness (Read 112,542 times)


Offline Buckaroo Banzai

  • Max Level
  • *
  • Posts: 36,028
  • Sex: Male
Re: A skeptical view (no philosophical bullshit) of consciousness
« Reply #1300 on: June 2, 2017, 12:33:44 »
That probably has nothing to do with what I said, which was most likely about some branch of the vagus nerve.

The thymus commonly shows a considerable degree of atrophy (close to total, I think), so it is unlikely to be the correlate of the physical associations of emotion.

https://www.psychologytoday.com/blog/the-athletes-way/201405/how-does-the-vagus-nerve-convey-gut-instincts-the-brain

It even says here that the vagus nerve's pathway to the gut is necessary for rats to be able to undo fear associations. Perhaps the "butterflies in the stomach" stay "frozen" if that communication is lost.

Offline Buckaroo Banzai

Re: A skeptical view (no philosophical bullshit) of consciousness
« Reply #1301 on: July 18, 2017, 23:05:15 »
https://www.youtube.com/v/lyu7v7nWzfo

I agree with the stricter claim that "more intelligence alone will not produce consciousness," but that falls far short of allowing us to rule out artificial consciousness: it would only have to not be limited to that, and instead have the same functional organization as biological consciousnesses. Perhaps even some more limited ones would do.


Offline Buckaroo Banzai

Re: A skeptical view (no philosophical bullshit) of consciousness
« Reply #1302 on: July 29, 2017, 09:51:18 »
A civil servant missing most of his brain challenges our most basic theories of consciousness


...

“Any theory of consciousness has to be able to explain why a person like that, who’s missing 90% of his neurons, still exhibits normal behavior,” says Cleeremans.

...
paper: The Radical Plasticity Thesis: How the Brain Learns to be Conscious
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3110382/

Quote
http://boingboing.net/2015/07/28/man-born-with-virtually-no-b.html

Man born with "virtually no brain" has advanced math degree

The subject of this paper grew up with a normal cognitive and social life, and didn't discover his hydrocephalus -- which had all but obliterated his brain -- until he went to the doctor for an unrelated complaint.

[...]



Quote
http://www.rifters.com/crawl/?p=6116
No-brainer

... What scared me was the fact that this virtually brain-free patient had an IQ of 126. ...

... Lewin’s paper reports that one out of ten hydrocephalus cases are so extreme that cerebrospinal fluid fills 95% of the cranium. Anyone whose brain fits into the remaining 5% should be nothing short of vegetative; yet apparently, fully half have IQs over 100. (Why, here’s another example from 2007; and yet another.) Let’s call them VNBs, or “Virtual No-Brainers”.
The paper is titled “Is Your Brain Really Necessary?”, and it seems to contradict pretty much everything we think we know about neurobiology.

...


So now, what about everyone who said it was a myth that we use only 10% of our brains????

The Darwinist materialists are going to keep very quiet...

Perhaps there is a mechanism common to these cases of hydrocephalus with a highly functional brain and the particular ways neural networks are structured in autistic savants:

https://spectrumnews.org/news/excess-brain-fluid-in-infants-may-be-early-sign-of-autism/

Quote
http://www.rifters.com/crawl/?p=6116
...

Three decades after Lewin’s paper, we have “Revisiting hydrocephalus as a model to study brain resilience” by de Oliveira et al. (actually published in 2012, although I didn’t read it until last spring). It’s a “Mini Review Article”: only four pages, no new methodologies or original findings— just a bit of background, a hypothesis, a brief “Discussion” and a conclusion calling for further research. In fact, it’s not so much a review as a challenge to the neuro community to get off its ass and study this fascinating phenomenon— so that soon, hopefully, there’ll be enough new research out there to warrant a real review.
The authors advocate research into “Computational models such as the small-world and scale-free network”— networks whose nodes are clustered into highly-interconnected “cliques”, while the cliques themselves are more sparsely connected one to another. De Oliveira et al suggest that they hold the secret to the resilience of the hydrocephalic brain. Such networks result in “higher dynamical complexity, lower wiring costs, and resilience to tissue insults.” This also seems reminiscent of those isolated hyper-efficient modules of autistic savants, which is unlikely to be a coincidence: networks from social to genetic to neural have all been described as “small-world”. (You might wonder— as I did— why de Oliveira et al. would credit such networks for the normal intelligence of some hydrocephalics when the same configuration is presumably ubiquitous in vegetative and normal brains as well. I can only assume they meant to suggest that small-world networking is especially well-developed among high-functioning hydrocephalics.) (In all honesty, it’s not the best-written paper I’ve ever read. Which seems to be kind of a trend on the ‘crawl lately.)
The point, though, is that under the right conditions, brain damage may paradoxically result in brain enhancement. Small-world, scale-free networking— focused, intensified, overclocked— might turbocharge a fragment of a brain into acting like the whole thing.

...
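The small-world claim in the quoted passage is easy to see numerically. Below is a stdlib-only sketch (function names and parameters are mine, chosen for illustration) of a Watts–Strogatz ring lattice: rewiring even a small fraction of edges leaves clustering — the "cliques" — almost intact, while sharply shortening the average path between nodes.

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to its k nearest neighbours,
    with each lattice edge rewired to a random target with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k // 2 + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    for v in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (v + j) % n
                new = rng.randrange(n)
                if new != v and new not in adj[v]:
                    adj[v].discard(old); adj[old].discard(v)
                    adj[v].add(new); adj[new].add(v)
    return adj

def clustering(adj):
    """Mean fraction of each node's neighbours that are themselves linked."""
    total = 0.0
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for i in range(d) for j in range(i + 1, d)
                    if nbrs[j] in adj[nbrs[i]])
        total += 2.0 * links / (d * (d - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over node pairs (BFS from each node)."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

# A little rewiring keeps clustering high but collapses path length:
lattice = watts_strogatz(200, 8, 0.0)
small_world = watts_strogatz(200, 8, 0.05)
print(clustering(lattice), avg_path_length(lattice))
print(clustering(small_world), avg_path_length(small_world))
```

This is the resilience-relevant property de Oliveira et al. point to: communication stays cheap and local structure stays rich, so losing tissue need not destroy global connectivity.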

Offline Gigaview

  • Max Level
  • *
  • Posts: 13,790
  • "My sword has no parties."
Re: A skeptical view (no philosophical bullshit) of consciousness
« Reply #1303 on: November 30, 2017, 21:26:02 »
An interesting article that can contribute a lot to the discussions in this thread.

Quote
Chasing the Rainbow: The Non-conscious Nature of Being

David A. Oakley1,2* and Peter W. Halligan2
1Division of Psychology and Language Sciences, University College London, London, United Kingdom
2School of Psychology, Cardiff University, Cardiff, United Kingdom

Despite the compelling subjective experience of executive self-control, we argue that “consciousness” contains no top-down control processes and that “consciousness” involves no executive, causal, or controlling relationship with any of the familiar psychological processes conventionally attributed to it. In our view, psychological processing and psychological products are not under the control of consciousness. In particular, we argue that all “contents of consciousness” are generated by and within non-conscious brain systems in the form of a continuous self-referential personal narrative that is not directed or influenced in any way by the “experience of consciousness.” This continuously updated personal narrative arises from selective “internal broadcasting” of outputs from non-conscious executive systems that have access to all forms of cognitive processing, sensory information, and motor control. The personal narrative provides information for storage in autobiographical memory and is underpinned by constructs of self and agency, also created in non-conscious systems. The experience of consciousness is a passive accompaniment to the non-conscious processes of internal broadcasting and the creation of the personal narrative. In this sense, personal awareness is analogous to the rainbow which accompanies physical processes in the atmosphere but exerts no influence over them. Though it is an end-product created by non-conscious executive systems, the personal narrative serves the powerful evolutionary function of enabling individuals to communicate (externally broadcast) the contents of internal broadcasting. This in turn allows recipients to generate potentially adaptive strategies, such as predicting the behavior of others and underlies the development of social and cultural structures, that promote species survival. 
Consequently, it is the capacity to communicate to others the contents of the personal narrative that confers an evolutionary advantage—not the experience of consciousness (personal awareness) itself.

Overview

Most of us believe that what we call “consciousness” is responsible for creating and controlling our mental processes and behavior. The traditional folk usage of the term “consciousness” arguably has two aspects: the experience of “consciousness” and the contents of “consciousness”, our thoughts, beliefs, sensations, percepts, intentions, sense of agency, memories, and emotions. Over the past 30 years, there has been a slow but growing consensus among some students of the cognitive sciences that many of the contents of “consciousness” are formed backstage by fast, efficient non-conscious systems.

In our account, we take this argument to its logical conclusion and propose that “consciousness” although temporally congruent involves no executive, causal, or controlling relationship with any of the familiar psychological processes conventionally attributed to it. In particular, we argue that all “contents of consciousness” are generated by and within non-conscious brain systems in the form of a continuous self-referential personal narrative that is not directed or influenced in any way by the “experience of consciousness” (which we will refer to as “personal awareness”). In other words, all psychological processing and psychological products are the products of fast efficient non-conscious systems.

The misconception that has maintained the traditional conscious-executive account largely derives from the compelling, consistent temporal relationship between a psychological product, such as a thought, and conscious experience, resulting in the misattribution that the latter is causally responsible for the former. Perceiving such relationships as causal in physical and social contexts is of course helpful and important, allowing humans to interpret events in our environment, particularly when describing and understanding predictive and goal-directed actions (e.g., Blakemore et al., 2001; Woods et al., 2014). When we witness two billiard balls collide, we intuitively perceive one ball forcing the other to move in a designated direction despite simply observing a sequence of events. As Hood (2006) points out “humans are causal determinists; we cannot help but experience the world as a continuous sequence of events and outcomes.” Spatial continuity and temporal contiguity increase the likelihood that we will perceive causality (e.g., Woods et al., 2014). However, while two events can be temporally and spatially contiguous, we argue that personal awareness is qualitatively distinct and separate and as such does not exert any causal influence over the contents of the personal narrative (Halligan and Oakley, 2000; Blackmore, 2012, 2016). In other words, despite its intuitive attractiveness and folk acceptance, the ascription of executive functions or agency to “consciousness” either in part or as a whole, or to the “experience of consciousness,” we claim is a misconception.

Consequently, the focus of this paper is less concerned with explaining personal awareness, which we take as a given, but more with explaining the properties, functions, and adaptive significance of the non-consciously generated, self-referential psychological content of the personal narrative. This conceptual decoupling, we suggest, offers a more productive starting point and focus for cognitive science when exploring the origin and function of psychological processes, and the control over them which was previously attributed in large or small part to the presence of an executive “consciousness.” Moreover, we consider that it is the capacity to share the contents of the non-consciously generated personal narrative stream, rather than personal awareness per se, that confers an evolutionary advantage. The potential to share selective psychological content from the personal narrative, such as ideas and knowledge, underpins the development of socially adaptive strategies including understanding and predicting the behavior of others, and ultimately cultural evolution.

Notwithstanding the above, we have little option but to use in this article the terms “consciousness,” “experience of consciousness,” “conscious awareness,” and “contents of consciousness” (all with single quotation marks) when referring to the traditional hybrid construct that implies some functional dependency between personal awareness and the control of higher psychological processes. Ultimately by removing what we see as the mistaken attribution of executive control and agency to “conscious experience,” we hope to avoid the necessity of characterizing cognitive/psychological processes in terms of the traditional binary distinction of “conscious” vs. “unconscious.” With this in mind, we favor the use of “psychological,” as the more neutral term in relation to this distinction, in preference to “cognitive.” Similarly, we use the term “non-conscious” in preference to “unconscious,” to reflect our view that all psychological processing and processes, including those forming what we call the personal narrative, occur outside “conscious experience.” Seen in such a light, a major aspect of the “hard problem of consciousness” (the problem of trying to explain how phenomenal experiences can influence physical processes in the brain) can be avoided in that the “experience of consciousness” (personal awareness) we argue can be seen to be a real, but passive emergent property of psychological processing and not some executive process capable of animating and directing our mental states. In this respect, we favor Huxley's analogy which regarded “consciousness” as being like a steam whistle on a train—accompanying the work of the engine but having no intrinsic influence or control over it (Huxley, 1874). In summary, personal awareness is real, present, and contemporaneous with non-conscious products, but it is not causal and does not exert any influence on our psychological products. 
Our account does not aim to explain the other feature of the “hard problem”—namely the question as to why we have subjective experience at all.

In addition to presenting our view of “consciousness” in more detail in this paper we will discuss some of its broader implications for cognitive neuroscience. We will also explore its relevance in relation to the social role of suggestion, its potential for understanding of processes underlying suggestion, dissociation, and related clinical conditions, as well as implications for the topics of free-will and personal responsibility. We start however, with a brief historical overview of ideas about “consciousness.”


The Rise and Fall of “Consciousness”

In 1976, Jaynes suggested that early in human evolutional history, the experience of “consciousness” was initially interpreted as external voices that commanded actions and framed perceptions and beliefs not that dissimilar from hallucinations and delusions experienced in schizophrenia. More recent folk accounts of psychological states however have accepted “consciousness” as arising from, and under the control of, the individual's “self” (Bargh and Morsella, 2008). However, as far back as the nineteenth century, the founding fathers of psychology observed that many of our mental experiences arise from processes that we are not consciously aware of (James, 1892; von Helmholtz, 1897; Wundt, 1902). The latter realization, derived in part from observation of phenomena observed in hypnosis (Bargh and Morsella, 2008), was incorporated into the writings of Charcot and Freud (Oakley, 2012). This was further reinforced by the observations of several influential psychologists at the beginning of the “cognitive revolution” (Miller, 1962) who noted that even a cursory introspective examination of one's own “conscious awareness” quickly revealed that the products of thinking and perception were the result of non-conscious processes (Nisbett and Wilson, 1977; Halligan and Oakley, 2000).

Nevertheless over the past 60 years, cognitive psychology has retained a distinction between “automatic” mental processes—not involving “conscious awareness” and “controlled” processes that did (Miller, 1962; Nisbett and Wilson, 1977; Kihlstrom, 1987; Gazzaniga, 1988; Moscovitch and Umiltà, 1991; Halligan and Marshall, 1997; Velmans, 2000; Driver and Vuilleumier, 2001; Wegner, 2002; Pockett, 2004; Hassin et al., 2005; Frith, 2007, 2010; Earl, 2014; Frigato, 2014). The Global Workspace theory (Baars, 1988, 1997) likened “consciousness” to a working theater where psychological events created by non-conscious processes taking place behind the scenes, allowed some to enter onto the stage of “conscious awareness.”

This long standing and intuitive account of consciously mediated executive control has however been challenged, by a small but growing number of students of neuroscience (Gazzaniga, 1988, 2000; Haggard and Eimer, 1999; Halligan and Oakley, 2000; Velmans, 2000; Wegner, 2002; Gray, 2004; Pockett, 2004; Frith, 2007, 2010; Baumeister and Bargh, 2014; Frigato, 2014) who demonstrated the involvement of ever more sophisticated non-conscious systems involved in the execution and co-ordination of complex and interdependent psychological functions underlying thought, motivation, decision making, mathematical ability, and mental control in the pursuit of goals (Dijksterhuis and Aarts, 2010; Hassin, 2013).

Recognition of the pervasive adaptiveness of non-conscious systems increased further over the past 10 years (Bargh and Morsella, 2008) with non-conscious mechanisms being increasingly implicated in more complex phenomena, such as decision-making, face perception, conformity, and behavioral contagion (Hassin et al., 2005; Bargh et al., 2012), to the point where it was claimed that non-conscious systems could carry out all of the psychological activities traditionally assumed to depend on “consciousness” (Hassin, 2013). Consistent with the latter view, it has been argued that conscious control of behavior was purely illusory (Wegner, 2002). Not all researchers and theorists however agree and some form of executive role for “consciousness” systems continues to be retained or emphasized (Baumeister et al., 2011; Frith and Metzinger, 2016).

In parallel with these developments in cognitive psychology, compelling complementary evidence from cognitive neuropsychology has begun to highlight some of the fault lines between traditional accounts of “conscious” and “unconscious processes.” For example, patients with “blindsight” following damage to primary visual cortex show that actions can be guided by sensory information that they remain largely unaware of, challenging the common belief that perceptions must enter “conscious awareness” to affect or produce our actions (Weiskrantz, 1985). Similarly, in cases of visual neglect, patients can show impressive non-conscious processing for stimuli on the neglected side of their visual fields, including object identification despite lack of reported visual awareness (Marshall and Halligan, 1988; Driver and Mattingley, 1998).

Quantifying the Timing of “Conscious Awareness”

In the 1980's powerful evidence emerged where it was shown that our intentions to act (deliberately make a motor movement) occurred later than the ongoing preparatory brain activity (readiness potentials) in motor systems of the brain (Libet et al., 1983). This implied that awareness of the decision to move and preparation of that movement was produced by prior non-conscious processes with the experience of conscious intention coming too late to be the initiator of the motor act. Further evidence that timing of the readiness potential and experience of the intention to move was non-linear, suggested that the two were largely independent (Haggard and Eimer, 1999; Schlegel et al., 2013). Also, research using hypnotic suggestion to create self-initiated movements without the conscious experience of intention showed that unintended, “involuntary” movements were also preceded by readiness potentials (Schlegel et al., 2015) but that the estimated time of the movements obtained from the participant was more consistent with passive rather than with voluntary movements (Haggard et al., 2004; Lush et al., 2017).

Given the independence of readiness potentials and the experience of an intention to act, one possible conclusion is that the latter is not part of the stream of processing leading to a movement, but rather the result of a consistent (non conscious) post-hoc attribution of intentionality to any non-reflexive, self-generated action. EEG evidence investigating phantom limb movement also indicated that the experience of both positive and negative volition is generated by brain activity occurring before the movement itself (Walsh et al., 2015a).

Clearly there are processes involved in what are described by the individual as voluntary movements that are upstream of the readiness potentials, but there is no reason to assume that any of these processes are not also non-consciously produced. Overall, the evidence appears consistent with the view that preparation to move originates in non-conscious systems and that the awareness of the intention to move is experienced only if that preparation becomes part of an ongoing, non-consciously generated personal narrative.
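The temporal ordering this argument turns on can be made concrete with Libet's commonly cited average timings. The sketch below is illustrative only: the values are approximate, vary considerably across studies, and are my own summary rather than anything from the quoted paper.

```python
# Illustrative only: approximate average timings (ms before recorded
# movement onset) as commonly cited from Libet et al. (1983).
events_ms_before_action = {
    "readiness potential onset": 550,   # preplanned acts; less for spontaneous
    "reported conscious intention (W)": 200,
    "movement onset": 0,
}

# Sort into the order in which the brain actually produces the events:
timeline = sorted(events_ms_before_action.items(), key=lambda kv: -kv[1])
for name, t in timeline:
    print(f"t = -{t} ms: {name}")

# The gap the argument turns on: motor preparation is already under way
# well before the intention is felt.
gap = (events_ms_before_action["readiness potential onset"]
       - events_ms_before_action["reported conscious intention (W)"])
print(f"preparation precedes felt intention by ~{gap} ms")
```

Whatever the exact numbers, the point of the passage is only the ordering: preparation, then felt intention, then movement.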

Consistent with this is a review of evidence from studies of brain damage leading to spatial neglect, which has distinguished widespread areas of the brain capable of processing up to eight different aspects of spatial perception (such as image perception, spatial image positioning, and emotions related to the images) and two areas (anterior cingulate and precuneus-posterior cingulate) involved in access to “consciousness” (Frigato, 2014). This suggests that brain injury can damage aspects of perception or can interfere with “consciousness” associated access mechanisms, preventing the consciously correlated experience of certain types of percept whilst leaving access to these perceptual processes at a non-conscious level intact. Importantly, however, brain processes taking place in both the access areas and the perceptual areas can be regarded as non-conscious, with the “access areas” responsible for selectively forming the products of the perceptual processing areas into a personal narrative. It is only the personal narrative, we argue, that is accompanied by personal awareness.

Despite increasing, persuasive evidence from psychological and neuropsychological research over the past 30 years demonstrating the involvement of non-conscious processes in generating the “contents of consciousness,” there has been a widespread reluctance to draw the natural conclusion that both aspects of “consciousness” (experience and contents) depend on non-conscious mental processes. The intuitive preference for retaining a conscious-experience led model of mental processing is supported by long-standing beliefs, nurtured by daily experiences whereby “self” and “consciousness” are inextricably linked to all forms of perception and motor control.

However, we argue that attributing psychological/executive functions to “conscious experience” (personal awareness) contributes little to the explanatory account of the processes responsible for our ongoing stream of psychological states.

In particular, we include all contents of “consciousness” such as intentions, the perception of self, and the experience of executive control, as products of non-conscious processes. Non-conscious brain systems carry out all core biological processes and our account is consistent in suggesting that psychological functions, including those normally attributed to “consciousness” should be regarded as no different (Hassin, 2013). Non-conscious causation provides a more plausible (albeit non-intuitive) basis for explaining both what is conventionally considered to be “contents of consciousness” and the concurrent “experience of consciousness.” It is also consistent with the observation that, “in the rest of the natural sciences, especially neurobiology, the assumption of conscious primacy is not nearly as prevalent as in psychology. Complex and intelligent design in living things is not assumed to be driven by conscious processes on the part of the plant or animal, but instead by blindly adaptive processes that accrued through natural selection” (Bargh and Morsella, 2008, p. 8).

Also, in relation to social and cultural contexts, there is increasing evidence that non-conscious neural systems arrive pre-configured with developmentally receptive psychological tools designed to navigate social environments and challenges (Cosmides and Tooby, 2013). The ability to share the contents of our individual psychological states with others however confers a social benefit and a powerful evolutionary advantage (Jaynes, 1976; Humphrey, 1983; Barlow, 1987; Dunbar, 1998; Charlton, 2000; Velmans, 2000; Frith, 2007, 2010; Baumeister and Masicampo, 2010). In particular, we argue that it is precisely the capacity to communicate selectively the contents of our non-consciously generated personal narrative that confers an evolutionary advantage, and not the “experience of consciousness” per se.


Anthropomorphism and the Search for Meaning

Having hopefully displaced “consciousness” from its traditional executive driving seat, our account naturally begs the question as to its purpose or function, in particular, why did consciousness arise in evolving organisms if it doesn't appear to do anything? To address this, a consideration of the functional explanations offered for other apparently evident but equally mysterious phenomena may be helpful.

Rainbows result from the bending of sunlight passing through raindrops, which act like prisms to create a distinctive arc of colors in the sky, with red on the outer part and violet on the inner section. Despite appearances, the rainbow does not occupy a particular place, its apparent position depends on the observer's location in relation of the sun. Nevertheless, like “conscious experience,” rainbows are subjectively “real” phenomena produced by physical processes. However, before the physical explanation was discovered, many different cultures felt compelled to attribute a range of different functions or purposes to the existence of the rainbow phenomenon. For example, a biblical version regards rainbows as a sign from God to never again flood the earth and kill every living thing (Genesis 9:8–15). In Graeco-Roman mythology, the rainbow was considered to be a path between Earth and Heaven. In Chinese culture it was believed to be a slit in the sky sealed by a goddess using stones of five different colors. In Irish mythology, the point where the rainbow makes contact with the earth was said to indicate the elusive hiding place of a pot of treasure.

Most of these accounts can be seen as instances of a wider predisposition toward anthropomorphism, a predisposition to attribute intentions, beliefs, and characteristics to non-human and inanimate objects and events, which we would argue is deeply embedded in non-conscious psychological processes. Anthropomorphism itself can be seen as an example of a wider human “drive for causal understanding” (Gopnik, 2000) that can lead to confabulations and delusions in some neuropsychological conditions, and also in neurologically intact individuals (Coltheart, 2016), particularly given the apparent predisposition in humans toward abductive inference (Fodor, 2000). Gopnik (2000) suggests that “explanation may be understood as the distinctive phenomenological mark of the operation of a special representation system”. “designed by evolution to construct …. “causal maps”…abstract coherent, defensible representations of the causal structure of the world around us … “as” the phenomenological mark of the fulfillment of an evolutionarily determined drive”. The result is occasionally manifest in “magical, mythical, and religious explanations,” especially in situations where the alternative is having no explanation at all, but overall it is “consistent with the view that the [representational] system evolved because, in general, over the long run, and especially in childhood, it gives us veridical information about the causal structure of the world” (Gopnik, 2000, p. 315).

Rainbows and other celestial phenomena such as eclipses and the northern lights are indisputably as “real” as personal awareness. However, little is gained, by asking “what is the purpose or function of an eclipse or a rainbow?” Indeed, posing such a question assumes some hidden, significant explanation to be discovered. Importantly, in our view personal awareness, like rainbows and eclipses, is not a product of evolutionary selection processes and does not have a demonstrable evolutionary purpose in its own right. Rather it is the incidental accompaniment to the final stages of the information processes in the brain responsible for creating a personal narrative. In the same way that there is arguably no purpose to an eclipse or a rainbow, we suggest the same for personal awareness. Personal awareness just “is,” though as humans we feel compelled to “explain” it by attributing a functional capacity, purpose, or meaning to it and in so doing, we argue, has generated a host of misconceptions. In the case of “consciousness,” the exquisite temporal contiguity between personal awareness and the contents of the personal narrative have understandably and readily provided a reliable, intuitive and commonly unquestioned explanation for a compelling causal association between the two that remains particularly difficult to argue against.

The dangers of drawing such anthropomorphic attributions or explanations was nicely captured by Albert Einstein (quoted in Home and Robinson, 1995, p. 172): “If the moon, in the act of completing its eternal way around the earth, were gifted with self-consciousness, it would feel thoroughly convinced that it was traveling its way of its own accord on the strength of a resolution taken once and for all”. We should be wary of making the same mistake with consciousness.

A similar misattribution surrounds the experience of a phantom limb following amputation, often associated with pain and still considered by many as counter-intuitive and anomalous (Halligan, 2002). Historically, in keeping with religious beliefs at the time, this common phenomenological experience was initially explained as being the product of a miraculous form of limb restoration (Halligan, 2002). This explanation also avoided the necessity to challenge the compelling folk account that it was not possible to feel a body part that was no longer physically present. The source of this misconception was nicely addressed by Melzack (Melzack, 1992; Saadah and Melzack, 1994) who points out “Phantoms become comprehensible once we recognize that the brain generates the experience of the body. Sensory inputs merely modulate that experience; they do not directly cause it. p. 126” (Melzack, 1992).
https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01924/full

« Last edited: November 30, 2017, 21:59:26 by Gigaview »
"Whoever is Brazilian, follow me." Duque de Caxias

"We're going to change all that. OK?" Captain Mito Bolsonaro

Offline Gigaview

Re: A skeptical view (no philosophical bullshit) of consciousness
« Reply #1304 on: November 30, 2017, 21:30:22 »
Continued...

Quote
The Oakley-Halligan Account

A key feature of our account (some of which has been anticipated by others) is that it does not set out to offer an explanation for the subjective “experience of consciousness” but rather to highlight what we consider to be the fundamental misconception rooted in everyday experience and embedded in the powerful folk-view of the nature of “consciousness.” Central to our view, developed over many years (Oakley, 1985, 1999a,b, 2001; Oakley and Eames, 1985; Halligan and Oakley, 2000; Brown and Oakley, 2004), is the simple proposition that all neuropsychological processing takes place independently of the experience of “consciousness.” This is not to deny the powerful and ubiquitous existence of “conscious experience” but rather to claim that all executive psychological processes, however quickly and intuitively causal they might appear, actually reflect background neuropsychological activity that takes place in non-conscious systems. As noted earlier, to avoid unwanted associations embedded in traditional accounts of “consciousness” we have chosen to use the terms “personal narrative” and personal “awareness” in our account in place of “contents of consciousness” and “experience of consciousness.”

In our view (summarized in Figure 1), it is more parsimonious to conclude that personal awareness is a phenomenal accompaniment of a continuously updated and individually oriented Personal Narrative, produced and coordinated by extensive non-conscious systems forming a Central Executive Structure (CES) (Halligan and Oakley, 2000). This personal narrative represents a small and selective fraction of the total products of psychological activity taking place in the brain and available to the CES.

Quote
Figure 1.



The Oakley-Halligan model. The schematic diagram shows all current CES functions and other psychological activities as non-conscious processes and their products. The most task-relevant of these psychological products are selected by a Central Executive Structure (CES) to create an ongoing personal narrative via the process of Internal Broadcasting. This personal narrative is passively accompanied by personal awareness - a by-product of Internal Broadcasting. Some components of this narrative are selected by the CES for further transmission (External Broadcasting) via spoken or written language, music, and art to other individuals. The recipients in turn transmit (internally then externally) their own narrative information, which may contain, or be influenced by, the narrative information they have received. The CES also selects some contents of the current personal narrative for storage in autobiographical memory. The contents of external broadcasts contribute (via Cultural Broadcasting) to an autonomous pool of images, ideas, facts, customs, and beliefs contained in folklore, books, artworks, and electronic storage systems (identified as “Culture” in the Figure) that is accessible to others in the extended social group but is not necessarily dependent on direct interpersonal contact. The availability of culturally based resources is a major adaptive advantage to the social group and ultimately to the species as a whole. The CES has access to self- and other-generated externally broadcast content as well as to cultural information and resources, all of which have the potential to provide information that supports the adaptedness of the individual and to be reflected in the contents of their personal narrative. As a passive phenomenon, personal awareness exerts no influence over the CES, the contents of the personal narrative or on the processes of External and Cultural Broadcasting. 
In the Figure, non-conscious processes are identified in green and personal awareness (subjective experience) in blue.
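As a purely illustrative aid (not part of the authors' model; all class and function names here are invented), the one-way flow the caption describes, where non-conscious products are selected by the CES into a narrative, a fraction of which is shared outward into a cultural pool, while awareness is a by-product that nothing reads back, can be sketched as:

```python
from dataclasses import dataclass, field

@dataclass
class CES:
    """Toy Central Executive Structure: selects task-relevant products."""
    culture_pool: list = field(default_factory=list)  # "Culture" in Figure 1

    def internal_broadcast(self, products, task):
        # Select only the most task-relevant non-conscious products
        # to form the current personal narrative.
        narrative = [p for p in products if task in p]
        # Personal awareness is a passive by-product: it is derived from
        # the narrative, and no later step ever consults it.
        awareness = bool(narrative)
        return narrative, awareness

    def external_broadcast(self, narrative):
        # External Broadcasting shares a selected fraction of the
        # narrative; it also feeds the one-way cultural pool.
        shared = narrative[:1]
        self.culture_pool.extend(shared)
        return shared

ces = CES()
narrative, awareness = ces.internal_broadcast(
    ["plan lunch", "regulate breathing"], task="lunch")
print(narrative)         # only the task-relevant product enters the narrative
shared = ces.external_broadcast(narrative)
print(ces.culture_pool)  # culture accumulates externally broadcast content
```

Note that `awareness` is returned but never read by any subsequent step; that inertness is the model's epiphenomenal claim in miniature.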

The Personal narrative (PN) has compelling real-world face validity—particularly when linked to the notions of “self” and “personhood.” While previously attributed to “consciousness”—given its temporal association with the same—in our account the PN is not produced or in any way constrained by conscious experience. Contents of the PN are however experienced by us as embodied individuals. All psychological products of mind are housed within a corporeal framework which ensures that the PN provides meaning for what is happening and preparedness for all embodied action options (movements, gestures, verbalizations, actions) that from a subjective perspective are purely private and publicly inaccessible. This sense of embodiment and meaning is critical to the non-conscious nature of the contents of the PN and forms the gravitational focus of a “psychological self” located within a “bodily self.”

We describe the process of generating this personal narrative as Internal Broadcasting. This process, which we previously referred to as “outing” (Halligan and Oakley, 2000), is similar to other accounts which suggest that there is a special brain function, a form of “interpreter” (Gazzaniga, 1985), which constructs a meaningful account of our non-consciously generated behavior and provides an ongoing explanation for it through a process of “narratization” (Jaynes, 1976). In our account the self-referential personal narrative, created as a product of internal broadcasting from non-conscious systems, is accompanied by personal awareness. In Figure 1, non-conscious processes relating to CES activity, including those responsible for the reiterative creation of the personal narrative, are shown in the oval “bubble.” The non-conscious end-products of CES activity that form the personal narrative are shown in the rectangle immediately above the bubble. It is important to highlight that the personal narrative (as an output) has no processing capacity of its own; it is simply the end-product of selective, competitive psychological processes. “Personal awareness” is represented by the separate, filled rectangle immediately above the personal narrative. As we discuss elsewhere, we argue against any functional or causal relationship between the personal narrative and personal awareness.

A central role of the CES involves the selection from a wide range of available psychological products those that best reflect ongoing brain activity in relation to current tasks, facilitating identification of the most relevant behaviors for an individual to engage in, and the choice of the most appropriate actions. The CES draws from these competing sources to create a personal narrative relevant to current needs, although other high-priority brain events may also be represented in the narrative in the form of non-task related thoughts, memories, and emotions such as intrusive reflexive responses, emotional responses, traumatic memories, and actions not as planned, originating outside the CES. Importantly, however, most brain activity, including much of that taking place in the CES, is not represented in the personal narrative. Typically not included are processes that underlie most basic bodily functions regulated by the CNS, such as breathing, the control of individual muscles, digestion, the onset of sleeping, and waking up [or events that take place between the two, such as dreaming or the processing, reorganizing, and consolidation of vocabulary and memories (Rasch and Born, 2013; James et al., 2017)]. Also there is no record of the brain activity that underlies the identification of sounds, sights, tastes, smells, and the integration of these into the changing sequence of events and objects in the outside world, or of processes underlying thoughts, actions, likes and dislikes, feelings, and moods. The CES selects relevant end products from these psychological processes when creating the personal narrative, but typically includes no reference to how these products were generated. In many situations involving rapid or routine decision making, for example, underlying thought processes are not reflected in the personal narrative.
However, if the process of making a decision or thinking about a problem becomes part of the task in hand, many of these underlying thoughts may be internally broadcast to form part of the ongoing narrative and hence are accompanied by personal awareness (a parallel “conscious” experience), a distinction that Kahneman (2011) drew between “fast” and “slow” thinking.

Importantly, our account is consistent with phenomenological reality. For instance, I don't know what I am going to say or write next—it simply appears as a thought or verbalization. The personal narrative is not the originator but rather the vehicle through which such non-conscious products are presented. This point was dramatically illustrated by the children's author Enid Blyton, who described how, when beginning a new book, she would simply sit at her typewriter and wait, and then “My hands go down on my typewriter keys and I begin. The first sentence comes straight into my mind, I don't have to think of it. … To write book after book without knowing what is going to be said or done sounds silly—and yet it happens. Sometimes a character makes a joke, a really funny one, that makes me laugh as I type it on my paper—and I think, ‘Well, I couldn't have thought of that myself in a 100 years!’ and then I think, ‘Well who did think of it then?’” (Stoney, 1992, p. 216–217). Equally, there is anecdotal and research evidence that apparently spontaneous acts of creativity in science and art arise through non-conscious processes, as with recalling the maiden name of one's mother, the results of which are later incorporated fully-formed into the personal narrative, often after a period of sleep or distraction (Ghiselin, 1952; Miller, 1962; Ritter et al., 2012). This is not to deny that they may then be further refined or incorporated, by equally non-consciously generated thought processes, if they become part of the ongoing task represented in the personal narrative.

An integral and key aspect of the personal narrative process, we argue, is the incorporation of a self-referential perspective. This provides for the sense of agency and autobiographical time, as well as ownership and responsibility for what are considered to be our internally generated thoughts, actions, percepts, sensations etc. Agency in the context of movement is preserved in the personal narrative by the introduction of the representation of an intention to act in close temporal proximity to the relevant body part movement. This coherence is important for maintaining a consistent, meaningful personal narrative where the notion of self is represented as being the key reference for executive control. It is also consistent with the observation that neural indicators of an impending movement precede the appearance of the intention to move in the personal narrative.

The CES monitors and, where necessary, amends the contents of the personal narrative on an ongoing basis to ensure current and retrospective consistency in relation to self over time and to avoid and resolve internal conflicts (cognitive dissonance). Importantly the ongoing personal narrative (comprising thoughts, beliefs, ideas, intentions, perceptions, feelings etc.,) is available for storage in whole or in part in episodic/autobiographical memory systems and these serve in turn as an important reference point for future action. In this sense, episodic memory is based on a current account of events (the personal narrative) created by the CES, colored and shaped by individual needs, beliefs and goals and forms the basis on which the past is represented and on which current beliefs, behavior and thoughts can be justified, particularly in interaction with other individuals.

Finally, we propose that the creation of this consistent personal narrative confers an evolutionary advantage for the individual in the form of survival and reproductive benefits through the ability to selectively share its contents, and via a potentially wider benefit for the human species as a whole (Wilson, 1975; Dawkins, 1976; Halligan and Oakley, 2015). The social advantage would be expected to occur initially within families and near relatives, extending to progressively wider groups with close genetic relationships. We refer to the first stage of the process of sharing narrative information with other individuals as External Broadcasting. This involves the transmission of private psychological contents, such as thoughts, ideas, concepts, beliefs, abstractions, sensations, feelings, urges, and concerns from the personal narrative, implicitly via facial expression, posture and gestures but, most importantly, conventionally through speech and other means, such as writing, art, music and electronic media. The CES also has access to shared information deriving through channels emphasized in social mirror theory, such as song, dance, and various forms of play, especially that involving make-believe and role-taking (Whitehead, 2001).

A further, third stage, Cultural Broadcasting, is the process by which information, thoughts and ideas enter a communal or social pool (labeled “Culture” in the figure) which is not dependent on direct contact between individuals and is represented in written or digital materials, artifacts, and social structures.

Individuals receive both their own and others' external broadcasts via relatively autonomous (modular) lower-level perceptual and sensory systems. An important role of the CES involves monitoring both of these inputs, incorporating relevant information from the external broadcasts of others into its own ongoing processing and, in the case of the individual's own external broadcasts, correcting or updating earlier transmitted information if necessary. Individual reasoning is largely intuitive, self-centric, and biased in favor of existing beliefs (Mercier and Sperber, 2017), but in social contexts, while individuals seek to confirm their own viewpoint through argumentation, they can be exposed to the conflicting views of others via the process of mutual external broadcasting and can critically assess them, leading ultimately to the development and circulation of better-formulated social policies and scientific beliefs.

The CES is also able to access cultural information. In terms of the model we are presenting, “Culture” has a dynamic element in that it originates in, and is mediated initially by, individual External Broadcasts. More importantly, it comprises a supra-individual system (artifacts, books, the internet etc.) accessible directly by individuals via downstream non-conscious systems, thereby upwardly available to the individual's CES, and may ultimately be reflected in the content of their personal narrative. Cultural Broadcasting is a one-way process: “Culture” is fed by the external broadcasts of individual personal narratives, but it attains an independent status as a resource or context that is accessible to individuals rather than being actively outputted to them.

While both External and Cultural Broadcasting are supra-individual, it is important to emphasize that humans are highly adapted and indeed prone to take advantage of feedback they receive via external and cultural broadcasting from others and from their environments. Humans are equipped, for example, with inbuilt predispositions including the generation of a sense of agency, the tendency to infer causality from environmental and social events, to attribute human characteristics to non-human and inanimate objects and phenomena, to develop a Theory of Mind and to respond to interindividual influences such as instruction, suggestion, and the transmission of beliefs. We have considered some of these above and explore examples of adaptive receptivity further below. For now, however, it is important to note that in our view all of such adaptations are mediated solely by non-conscious processes.
https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01924/full
« Last edited: 30 November 2017, 21:37:26 by Gigaview »
"Quem for brasileiro, siga-me." Duque de Caxias

"Vamos mudar isso aí. Tá OK?" Capitão Mito Bolsonaro

Offline Gigaview
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Reply #1305 on: 30 November 2017, 21:32:20 »
Continued...

Quote
Similarities to Other Accounts

Currently influential psychological views of consciousness are broadly classifiable as global workspace and higher order theories. Representing the first of these is Baars' (1997) “Theater of Consciousness.” Central to this metaphorical account is the view that within the brain there are neural areas that “work together to display conscious events” (p. ix) and to produce a coherent story—by analogy to the writers, directors, producers etc. who are responsible for what occurs on stage. This has clear similarities to our “personal narrative,” but in the “theater” account “consciousness” appears to be a distinct entity with a specific role: it “creates access to many knowledge sources in the brain” (p. 6). In our account the personal narrative (“contents of consciousness”) and personal awareness (“experience of consciousness”) are both end products of non-conscious processes and have no active role.

Higher order theories (see Carruthers, 2007) view consciousness as a property of a second more executive level of processing by which, for example, we not only perceive (say, the rainbow) but become aware of our perceptions (i.e., are aware of being aware of seeing a rainbow). In our model this second level of processing is represented in the non-consciously generated personal narrative independently of the parallel experience of consciousness. In common with our own account neither theater nor higher-order theories offer a solution to the “hard problem” of how the processes they propose produce a subjective/conscious experience (personal awareness) within a physical entity such as the brain.

As functionalists, the proponents of theater and higher-order theories could however argue that there is no need to distinguish a separate mental property (“personal awareness”) above and beyond the generic functional property that mental states are internal states of thinking creatures. As such, there is no hard problem to be solved. In our account, while never denying the phenomenal existence of consciousness (personal awareness), we adopt an epiphenomenalist view, whilst recognizing its acknowledged lack of intuitive appeal. We argue that subjective mental experiences are non-efficacious or “collateral” products of neurophysiological activity without an obvious proximal purpose in the same way that rainbows and eclipses are in relation to underlying physical processes. Nevertheless, we recognize that in the search for meaning, personal awareness as with eclipses and rainbows has been endowed variously by tradition and folklore with both a function and a capacity to interact.

In sum, we propose that consciousness (personal awareness) is a product of antecedent brain processes and has no functional role in itself for influencing subsequent brain states. As such, lacking an executive function, we consider the experience of consciousness as epiphenomenal. We accept that when we refer to, and talk about, personal awareness this reference is not caused by personal awareness itself but is part of the narrative generated directly by ongoing neural processes. For our part we defer the hard problem on the assumption that ultimately cognitive neuroscience, information theory and related disciplines will identify the processes that are accompanied by subjective experience and provide some insight into the underlying mechanisms creating the rainbow that is conscious experience.

There are also some similarities between the model we present and other recently published theoretical views. For example, the Passive Frame Theory (Morsella et al., 2016a,b) argues that the contents of “consciousness,” including a self-focused narrative, are generated by non-conscious processes, with awareness of these contents being a later-arriving accompaniment. This account goes on to conclude, however, that “consciousness” serves an intrapersonal role, critical for the functioning of the skeletal muscle output system. By contrast, in our model we propose that the main advantage of creating a self-referential personal narrative is a social one deriving from the ability to share its contents with others. Pierson and Trout (2017) also emphasize the intra-personal function of consciousness, describing the experience of consciousness in particular as an evolved force separate from brain function that underlies volition and free-will, especially in relation to movement. In their view, consciousness can exert an active downward influence on brain processes; in particular it can initiate volitional movements, which are then executed by non-conscious processes in the brain. Though the authors present a case for why “consciousness” evolved, they accept that there is at present no explanation of the mechanism by which an apparently non-physical “consciousness” could be created in living systems. The latter is consistent with our own epiphenomenalist stance, and the view that the experience of consciousness is devoid of any executive capabilities. In contrast, a proposed data compression approach to understanding the phenomenon of “consciousness” derived from theoretical computer science (Maguire et al., 2016) emphasizes the social relevance and evolutionary advantage deriving from the development of adaptive strategies, including the ability to predict the behavior of others based on a strong representation of the self.
A similar information processing account (“attention schema theory”), proposed by Graziano and Webb (2017), views the experience of consciousness as linked to the development over a long evolutionary period of a self-referential internal model of awareness. In common with our account, the attention schema model presents conscious experience as an accompaniment but, in contrast, does not address the contents of consciousness. Also, the relationship we propose between the personal narrative and episodic memory has a number of points in common with the views of Mahr and Csibra (2017), particularly in relevance to social interaction.

The Construct of “Self”

According to Damasio (2003) the self “is not a thing but a process, one that produces phenomena ranging from the very simple (the automatic sense that I exist separately from other entities) to the very complex (my identity, complete with a variety of biographical details)” (p. 227). In particular, he notes that it acts as a symbolic reference point for other mental contents as well as providing a self-centered view of the world so that objects and events are seen from the perspective of the organism that the self symbolizes. We suggest that this embodied “self” forms the basis for the idea that we own both our mental processes and our embodied form and “with the assistance of past memories of objects and events, we can piece together an autobiography and reconstruct our identity and personhood incessantly” (Damasio, 2003, p. 277).

The creation of a stable executive reference system, the “self” (Prinz, 2003), is central to our non-executive account of “consciousness” where we see it as another strategic high level product of non-conscious CES systems offering as it does a critical focus point for the personal narrative. In other words, the embodied self or “center of narrative gravity” (Dennett, 1991) is a conduit for internal broadcasting and an attributional locus for executive capacity including control over psychological functions. As such, it provides a consistent, coherent gravitational center, and reference point for all externally broadcasted contents of the personal narrative and subsequent wider social interaction. The embodiment of “self” as an independent agent in the world is a developmentally evolving mental representation, which we suggest stems from a form of inherited archetype, similar to the self-acquisition device posited for language development (Chomsky, 1965). Seen as the product of non-conscious CES systems, the construction of self pervades the internally broadcast narrative with a focus, unity, continuity, and consistency over time, while also serving to integrate perception and memories (Sui and Humphreys, 2015). Consequently, any disturbance to the development, or normal operations of the internally represented self can result in anomalous subjective experiences such as the depersonalization and disturbed self-other/self-world boundaries seen in schizophrenic spectrum disorders (Mishara et al., 2016). It is arguable that our brains can generate alternative self-related narratives reflecting among other things different social roles we enact in our lives and that these may compete for entry into the personal narrative by the CES depending on the ongoing task.

We agree with Dennett (1991) that the creation of self as a representation comprises part of a survival tactic, analogous to a spider spinning a web, in which we develop a story to inform others, as well as ourselves, of who we are—and “just as spiders don't have to think, consciously and deliberately, about how to spin their webs … (we) do not consciously and deliberately figure out what narratives to tell and how to tell them. Our tales are spun, but for the most part we don't spin them; they spin us.” (p. 418)

It is important to note that our account does not challenge the significance and theoretical importance of current concepts of “self-awareness” (“self-consciousness” in traditional terminology) and “self-image” but rather places the processes and constructs they refer to as products of non-conscious systems mediated by the CES and reflected in the personal narrative. Our model proposes that they do not depend on, or require, a collateral “experience of consciousness” (personal awareness).

Solving the “Hard Problem”?

The hard problem (Chalmers, 1996) involves two questions. First: “How and why do neurophysiological activities produce the ‘experience of consciousness’?” Our account addresses this by concluding that personal awareness is a passive, emergent property of the non-conscious processes that generate the contents of the personal narrative and is not causally or functionally responsible for those psychological contents. The converse question, “How can the non-physical experiences of ‘conscious awareness’ control physical processes in the brain?”, is consequently no longer relevant. We propose that there are no top-down executive controls exerted by either personal awareness or the personal narrative, as both are psychological end-points of non-conscious processes.

A “New Hard Problem”

A major challenge for the future lies however in the discovery of the neural mechanisms underlying personal awareness, though in our view this will not reveal its purpose—just as understanding the physical mechanisms involved in the creation of rainbows or eclipses does not provide an explanation as to their purpose. Nevertheless, as with rainbows and eclipses, it will be satisfying to eventually understand the neural processes behind it. In particular, we need to explore the association of personal awareness with particular types of information processing and whether this is unique to neural systems or can also be created in inanimate systems. However, this future challenge lies within the interdisciplinary domains of physics, philosophy, neuroscience, and information processing rather than cognitive science alone.

A related problem for any line of research that takes personal awareness as its focus is that of devising an objective means of determining its presence. Currently, we infer the existence of personal awareness in others by virtue of a commonality we share in belonging to the same species and having the same neural apparatus and mental states. We can determine the ongoing content of an individual's personal narrative by requesting a verbal report but this does not confirm the presence of personal awareness. If we ask “are you aware of this” and the answer is affirmative, we are inclined to readily accept this as confirmation of the “experience of consciousness”—the age-old philosophical question is whether we would draw the same conclusion if this response was elicited from an inanimate information processing system or indeed was signed by a non-human primate.

Evolutionary Benefit

So is there a purpose of “consciousness”? In our view, given the analogy with the rainbow, pursuing this question is liable to lead to confusion and there is no evolutionary benefit associated with personal awareness per se—it is simply the phenomenological accompaniment to the non-consciously mediated personal narrative. The personal narrative, however, we would argue has significant adaptive purpose for the individual and even more significant social evolutionary advantage, given the ability of individuals to transmit selected contents of their personal narrative to others via the process we have labeled External Broadcasting (see Figure 1). Our account is also broadly consistent with the views of others (Nietzsche, 1974; Jaynes, 1976; Humphrey, 1983; Barlow, 1987; Dunbar, 1998; Charlton, 2000; Velmans, 2000; Prinz, 2006; Frith, 2007, 2010; Baumeister and Masicampo, 2010) who accept that any evolutionary advantage lies not in the “experience of consciousness” (personal awareness) itself, but in the ability of individuals to convey selected aspects of their private thoughts, beliefs, experiences etc. to others of their species. We see personal narratives as having evolved over time and we assume they may not have always been accompanied by personal awareness in their early stage of development. However, at a certain level of computational complexity we assume the parallel quality of the subjective experience (the rainbow) became more evident, and in need of an explanation. An obvious response to the latter, given the temporal contiguities involved and the development of a gravitational self, was the attribution of causal or agentive properties.

Specifically, we regard External Broadcasting as a natural competence of all humans (and some animals) selectively to convey (Internally Broadcast) private psychological contents of the personal narrative (thoughts, ideas, concepts and abstractions, including art and music), as well as experiences (sensations, feelings, urges, concerns etc.), to others, predominantly via gestures and speech. The construction of a personalized identity (the self) by non-conscious systems representative of the “author” of this externally broadcast narrative content, including the attribution of the psychological qualities of awareness and agency, provides for a coherent reference point. The selection of contents of the internally broadcast narrative for External Broadcasting is controlled by the CES within the broad remit of communicating a personalized task-relevant account of current ongoing perceptions, thoughts, ideas, plans etc. to others whilst ensuring that the individual's self appears purposeful and consistent over time within the context of expectancies and beliefs of the immediate social group.

This process is far from linear, with a second or possibly multi-stage process within the CES working to monitor, correct, and amend earlier transmitted content during and after external broadcasting. Hence the non-consciously generated slips of the tongue that we all experience, and the often subsequent, equally non-consciously generated, “I'm sorry, that came out wrong—what I meant to say was…”. More importantly, the non-conscious processes within the CES generating the personal narrative have access to the externally broadcast outputs of others, as well as the individual's own previous written or digitized outputs. This is important for the future behavior and cognitions of the individual, but also, by re-transmission (re-tweeting) via their non-conscious systems into the personal narratives of others, has the potential in turn to influence their future thoughts and ultimately their behavior.

A second supra-individual level of transmission, Cultural Broadcasting (see Figure 1), is achieved via artifacts, writing, books, art, music, and more recently through radio, television, social media, and films, creating a pool of knowledge, skills, ideas, and beliefs potentially accessible to all members of the species. Ultimately, shared information and beliefs are shaped through Cultural Broadcasting into autonomous self-sustaining social systems traditionally embodied in education, art, social norms, and laws, and in long-term physical systems such as libraries and museums. Internal and External broadcasting as well as access to cultural resources may confer some survival advantage for the individual, but the major evolutionary driver and beneficiary is the group-benefit conferred by the process of Cultural Broadcasting and the establishment of an autonomous, supra-individual pool of culturally based-resources.

In this section and others that follow, where we use the established terms “mind,” “contents of mind,” “mind-reading” etc—it is important to underline that within our model, all of these refer to non-conscious processes and constructs. In social contexts, non-conscious systems orchestrate the external transmission of selective contents of the personal narrative, allowing the knowledge and perspective of individuals to be shared more widely with others in the group. This facilitates the fluidity of co-operation, sharing of information, and the development of adaptive strategies, as well as the construction of a Theory of Mind and the attribution of an awareness of self to others, at both an individual and cultural level (Humphrey, 1983; Aktipis, 2000; Charlton, 2000; Frith, 2007, 2010; Graziano and Webb, 2017).

The individual, social, and cultural significance of the development of a Theory of Mind, particularly through pretend play as a basis for “mind reading,” is increasingly recognized, and the failure to develop one at an individual level can be related to autism (Baron-Cohen, 1995; Frith and Happé, 1999; Heyes and Frith, 2014). In addition to the potential for predicting and influencing the thoughts and behaviors of others, there is a broader social dimension via cultural broadcasting of beliefs, prejudices, feelings, and decisions originating in non-consciously generated personal narratives. This in turn, raises the possibility that the mental content of individuals can be changed by outside influences such as formal education, new forms of social media, and music. The broadcasted or communicated narrative allows humans to take a shared-view, rather than an exclusively self-referential view. Revealing the content of our personal narrative to others: including our beliefs, prejudices, feelings, and decisions allows group members to characterize others and generate strategies, such as predicting their behavior, in particular through the capacity for “mind reading” (Heyes and Frith, 2014), all of which is potentially beneficial for social or species survival.

Communicating the contents of the personal narrative is also importantly a means of disseminating ideas that can be incorporated into social systems including the widespread, well-recognized concepts of free-will and natural law. Indeed, given their cultural prominence in most social and democratic cultural systems, it seems likely that these are significantly embodied in non-conscious systems for social adaptive advantage. Importantly, the social sharing of personal narratives allows for the possibility that their content can also be changed, again via non-conscious systems, by outside influences such as education and socializing.

At a cultural level, norms and values generated through individual interaction compete in society as “memes” that service the process of cultural evolution (Dawkins, 1976; Plotkin, 1994; Blackmore, 1999). It is inevitable perhaps that competition between memes has on occasions led to conflict and bloodshed, but on balance the outcomes in the form of social constructs such as democracy, human rights, equality, socialism, and capitalism, can be regarded as beneficial and species-enhancing. None of the social systems that human societies depend on are possible, however, without the smooth and consistent ability to share the contents of individual personal narratives.

External Broadcasting: the Social Role; Hypnosis and Suggestion

Contained within external broadcasts can be direct verbal suggestions (including hypnotic suggestions) that can influence a range of psychological phenomena, including so-called “automatic” processes, in the recipients and which may relate to a socially adaptive human trait (Halligan and Oakley, 2014; Terhune et al., in press). As an example, it is widely accepted that perception involves a constructive process that relies on non-conscious inferences based on past experience and prior knowledge (Gregory, 1997) and that as a consequence, we as individuals cannot, for example, change our perception of the colors in a Mondrian picture by the exercise of voluntary intention or choice. However, this colorful display can be turned into a gray scale image by appropriate suggestions, particularly in highly hypnotically suggestible individuals (Kosslyn et al., 2000; McGeown et al., 2012). In a recent study, Lindeløv et al. (2017) have shown, in a randomized actively-controlled trial, that working memory performance can be effectively restored by suggesting to hypnotized brain injured patients that they have regained their pre-injury level of working memory functioning. Phenomena of this sort have led to the increasing use of hypnosis with direct verbal suggestion as a tool in cognitive research as well as being a topic of interest in its own right (Oakley and Halligan, 2009, 2013; Oakley, 2012; Halligan and Oakley, 2013; Landry and Raz, 2016; Terhune et al., in press). Hypnosis-based research, including Kihlstrom's classic “The Cognitive Unconscious” paper (Kihlstrom, 1987), has been influential in developing our model.

The wider significance of these studies is that, whilst the effects of hypnotic suggestion can at first sight appear extraordinary (i.e., beyond that which would be expected), direct verbal suggestibility is normally distributed in human populations and can be seen as a prime example of a broader socially adaptive trait that is powerfully capable of harnessing aspects of our non-conscious systems (Halligan and Oakley, 2014). Consistent with this, empathy is one of the few personality traits correlated with hypnotic suggestibility (Wickramasekera and Szlyk, 2003) and is also associated with the ability to share at second hand an experience such as pain with another (Singer et al., 2004). On this basis, one plausible explanation for the widespread ability to respond to verbal suggestion is that suggestion underlies a socially cohesive ability to indirectly share experiences by re-creating them in others, not dissimilar to the function of “mirror” neurons that fire both when an animal acts and when the animal observes the same action performed by another.

In a similar vein it has been proposed that the often neglected psychological capacity of suggestibility has a more powerful social impact as a means of transcending reality (Schumaker, 1991) and understanding the minds of others, as well as promoting attachment and other cohesive social processes (Ray, 2007; Halligan and Oakley, 2014). It has also been noted that experiences similar to those produced in response to hypnotic suggestion are seen cross-culturally associated with religious and spiritual beliefs and practices, again indicating an important sociological function (Dienes and Perner, 2007). Related to this, hypnotic suggestion has been shown in fMRI studies to reliably produce experiences of alien control, thought insertion, and automatic writing seen in spirit possession, mediumship, and shamanism (Deeley et al., 2014; Walsh et al., 2014).
https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01924/full
« Última modificação: 30 de Novembro de 2017, 22:13:56 por Gigaview »
"Quem for brasileiro, siga-me." Duque de Caxias

"Vamos mudar isso aí. Tá OK?" Capitão Mito Bolsonaro

Offline Gigaview

  • Nível Máximo
  • *
  • Mensagens: 13.790
  • "Minha espada não tem partidos."
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1306 Online: 30 de Novembro de 2017, 21:34:54 »
Continued...

Quote
Suggestion, Dissociation, and Related Clinical Conditions

One advantage of our account is that it provides a potential framework for explaining several enigmatic phenomena such as suggestibility, dissociations between implicit and explicit awareness, and dissociative phenomena more generally. As noted above, all humans are responsive to some extent to direct verbal suggestion, typically contained within the external broadcast from another individual, and this responsiveness may reflect a socially adaptive trait. The most widely researched example is hypnotic suggestion, where the suggestion (defined as a communicable belief or perception) is delivered following a hypnotic induction procedure (Halligan and Oakley, 2014). According to our account, congruent responses to an external direct verbal suggestion result from non-conscious systems in the recipient's brain being recruited to engage in a socially-driven role-play by creating neural activity consistent with the suggested change itself (Oakley and Halligan, 2009, 2013). As a result, suggested experiences become part of the recipient individual's internally broadcast personal narrative, and concurrently also part of their personal awareness, and so are experienced as real, albeit involuntary, events. For example, suggested, but not imagined, experiences of pain are accompanied by activity in brain areas involved in pain processing (Derbyshire et al., 2004).

Similarly, involuntary hand movements following the suggestion that the hand is being moved passively by a pulley show the same patterns of neural activity as an actual passive movement (Blakemore et al., 2003) and when limb paralysis is suggested, but not when it is feigned, there are inhibitory changes in related motor areas similar to those seen in a hysterical limb paralysis (Halligan et al., 2000; Ward et al., 2003; Deeley et al., 2013a). Within the personal narrative, the account is of an actual primary experience, with the suggested effects being recorded, and reported, as involuntary. Interestingly, a record of hearing the suggestion itself may also be part of the personal narrative, unless the original suggestion includes source amnesia. It is important to emphasize that, in our model, direct verbal suggestion is seen as being received (via external broadcasting) and processed via the recipient's non-conscious sensory systems. As a result, brain states congruent with the suggestion are generated by central executive structures in accordance with the externally directed role-play. The results of this process are then broadcast into the personal narrative by central executive structures with the accompanying, parallel conscious experience. Consequently, the process initiated by a direct verbal suggestion is entirely bottom-up in its execution.

The idea of a non-conscious, motivated role-play underlying the effects of external suggestion also provides an explanation for some clinical conditions with the caveat that the “suggestion” or false belief (delusion) may be generated internally by non-conscious systems (Halligan, 2011). Consistent with this, hypnotic suggestion has been used to create experimental analogs for internal voices (hallucinations) and passive (alien) or unwilled (anarchic) movements seen in clinical conditions such as schizophrenia and in the culturally driven experiences of thought insertion and automatic writing (Blakemore et al., 2003; Deeley et al., 2013a,b, 2014; Walsh et al., 2014, 2015b), as well as to create delusions and disorders of belief (Cox and Barnier, 2010; Connors, 2015), such as those underlying the inability to recognize one's own reflection (Connors et al., 2013) and the transformation of gender identity (Noble and McConkey, 1995).

In motor conversion disorder (hysteria), as in hypnosis, the observed dissociative symptoms of paralysis, aphonia etc. are not related to known physical or physiological damage but rather are represented as subjectively powerful, “real” phenomena within the personal narrative (Oakley, 1999a; Bell et al., 2011). Even more dramatic perhaps, as a partial analog of dissociative identity disorder (multiple personality), is the phenomenon of the “hidden observer” in hypnosis (Hilgard, 1977) in which a parallel narrative process is suggested, classically in an individual concurrently experiencing suggested analgesia (Hilgard et al., 1975). This dissociated second narrative state can then be cued to represent the feeling of pain in the personal narrative, returning to the analgesic state narrative when the cue is reversed. The “hidden observer” reflects the existence of a second narrative process relating to a single self-representation. In dissociative identity disorder, two (potentially more) representations of self with their associated histories and ongoing experiences are available for entry into the personal narrative. Importantly, again all of the above implicate bottom-up, rather than top-down, influences on the streams of non-conscious processes that contribute to the content of the personal narrative and consequently to the parallel conscious experience. Specifically, where direct verbal suggestion is involved, the influence arises via a spoken input, is processed low down in the hierarchy of brain processes receiving and analyzing speech, resulting in changes within non-conscious systems that may ultimately be reflected in the recipient's personal narrative (with the accompanying personal awareness).

Free-will and Personal Responsibility

The commonly assumed belief in “free-will” (i.e., a self-directed “voluntary” ability to make non-deterministic, non-random choices between different possible courses of action) has long been considered a hallmark and function of “consciousness” and of “conscious awareness” in particular (e.g., Pierson and Trout, 2017). However, there seems no reason to suppose that this ability is beyond the processing capacities of fast-acting, non-conscious brain systems. If, as we propose, personal awareness, with its ubiquitous sense of self, agency, and decision making, is an accompaniment to underlying psychological processes, what implications does this have for socially-revered concepts of free-will and personal responsibility?

In support of the construct of free-will, it is sometimes argued that, although there is evidence that awareness of the intention to make a movement occurs later than the preparatory neural activity, the act of countermanding the previously experienced intention demonstrates the active involvement of a higher-level “conscious” process (i.e., an exercise of “conscious” free-will). According to our account, any subsequent decision or action to countermand a previously intended movement (for whatever attributable reason), can just as easily be explained as being generated by the same non-conscious systems (equally as an act of free-will) but with the “countermanding intention” only being broadcast temporally later into the personal narrative.

As our account removes any self-serving controlling influence from the contents of the personal narrative and personal awareness, it could be seen to undermine the principle of personal accountability. We, however, consider personal responsibility, a mainstay of the cultural broadcasting architecture and a social construct critical to most democratic and legal systems, as lying within non-consciously-generated actions and intentions transmitted into the personal narrative and in particular where these same contents have been publicly announced via external broadcasting. Both of these events are accompanied, albeit passively, by personal awareness (“experience of consciousness”)—thereby meeting the traditional moral and legal benchmark.

In our account, everyday constructs such as free-will, choice, and personal accountability are therefore not dispensed with—they remain embedded in non-conscious brain systems where their existence as near universal constructs serving powerful social purposes could well be seen in large part to be a consequence of cultural broadcasting impacting on personal narratives.

Conclusions

Historically compelling folk and lay accounts assume that “consciousness” provides for some executive control over the psychological processes that populate much of our mental content. This largely unquestioned and intuitively appealing view has received numerous challenges over the past 30 years. Even the most scaled-back accounts, however, appear reluctant to abandon completely the attribution of some kind of executive role to “consciousness.” Overall, these traditional accounts distinguish two main components: the “experience of consciousness” and the “contents of consciousness,” which we refer to as “personal awareness” and a self-referential “personal narrative.”

We take no issue with the experiential primacy or reality of personal awareness and the related powerful sense of agency and self that we all feel. We argue, however, that central to the traditional domain of “consciousness” is a personal narrative created by and within inaccessible, non-conscious brain systems where personal awareness is no more than a passive accompaniment to this process. In this view, both the personal narrative and the associated personal awareness are end-products of widely distributed, efficient, non-conscious processing that arrives too late in the psychological process cycle for there to be a reason to infer the necessity of an additional independent executive or causal capacity to either of them.

As far as our model is concerned, the contents of the personal narrative are end products of non-conscious systems. The fact that personal awareness (Huxley's steam whistle) accompanies the contents of the personal narrative is causally compelling but not relevant to understanding and explaining the psychological processes underpinning them (Huxley, 1874). We have argued that the everyday perception/belief of a causal association between the “experience of consciousness” and the “contents of consciousness” is based on a longstanding, albeit understandable, misattribution/misconception. We nevertheless accept that the non-conscious processes involved in creating the personal narrative may also create the experience of consciousness, much in the way that the hidden processes of reflection, refraction, and dispersion of light from water droplets generate the perception of the rainbow. In terms of our account, the “hard question” of how consciousness can influence brain processes is not so much “hard” as simply “wrong.” We are left with the reverse, equally “hard” but, from a cognitive psychological perspective, not theoretically relevant question of how non-conscious processes creating the personal narrative also appear to create an experience of consciousness.

While our account does not deny the reality of personal awareness or its association with personal narrative contents, we conclude that considering personal awareness as a form of high level executive psychological process has hindered the understanding of the nature and structure of the more relevant underlying psychological systems. The proper focus for both research and theory going forward is those neuro-psychological processes that underlie the personal narrative, which represents a continuously updated, self-related, meaningful, and selective account of on-going activity created by and within non-conscious systems. The personal narrative account informs historically consistent behavior in ongoing situations, provides potential content for retention in autobiographical memory and defines the self-related information available for communication to others. This is congruent with the view that autobiographical/episodic memory is not a record of events per se but is a partial and selective record of a personalized narrative about events.

As a real, but essentially non-executive, emergent property associated with the selective internal broadcasting of non-conscious outputs that form the personal narrative, we consider personal awareness to lack adaptive significance in much the same way as rainbows or eclipses. The non-consciously generated personal narrative on the other hand forms the basis for both significant individual and social adaptive advantage. The main evolutionary advantage lies in the selective public transmission of contents of the personal narrative, again under the control of non-conscious systems, and the sharing of these essentially private contents (thoughts, feelings, and information) with others in the local and wider social group. As part of this adaptive process, individuals are predisposed not only to transmit information from their own personal narrative but also to receive and process the externally and culturally transmitted outputs from others. In becoming available to others, the broadcast (and re-broadcast) content of individual personal narratives supports the mutual understanding of the drivers behind thought and behavior. This in turn facilitates the dissemination of ideas and beliefs, and ultimately the construction of resilient supra-individual social, cultural, and legal systems which has contributed to the stability and evolutionary adaptedness of the species.
https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01924/full
"Quem for brasileiro, siga-me." Duque de Caxias

"Vamos mudar isso aí. Tá OK?" Capitão Mito Bolsonaro

Offline criso

  • Nível 15
  • *
  • Mensagens: 360
  • Sexo: Masculino
  • γνῶθι σεαυτόν
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1307 Online: 30 de Novembro de 2017, 23:09:34 »
I'll check that out
Visita
Interiora
Terrae
Rectificandoque
Invenies
Occultum
Lapidem

and may the roses bloom upon your cross!

Offline Gigaview

  • Nível Máximo
  • *
  • Mensagens: 13.790
  • "Minha espada não tem partidos."
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1308 Online: 05 de Dezembro de 2017, 00:59:17 »
Quote

Why Panpsychism Fails to Solve the Mystery of Consciousness

Is consciousness everywhere? Is it a basic feature of the Universe, at the very heart of the tiniest subatomic particles? Such an idea – panpsychism as it is known – might sound like New Age mysticism, but some hard-nosed analytic philosophers have suggested it might be how things are, and it’s now a hot topic in philosophy of mind.

Panpsychism’s popularity stems from the fact that it promises to solve two deep problems simultaneously. The first is the famous ‘hard problem’ of consciousness. How does the brain produce conscious experience? How can neurons firing give rise to experiences of colour, sound, taste, pain and so on? In principle, scientists could map my brain processes in complete detail but, it seems, they could never detect my experiences themselves – the way colours look, pain feels and so on: the phenomenal properties of the brain states involved. Somehow, it seems, brain processes acquire a subjective aspect, which is invisible to science. How can we possibly explain this?

The second problem concerns an apparent gap in our scientific picture of the world. Physics aims to describe the fundamental constituents of the Universe – the basic subatomic particles from which everything is made, together with the laws that govern them. Yet physics seems to leave out something very important from its picture of the basic particles. It tells us, for example, that an electron has a certain mass, charge and spin. But this is a description of how an electron is disposed to behave: to have mass is to resist acceleration, to have charge is to respond in a certain way to electromagnetic fields, and so on. Physics doesn’t say what an electron, or any other basic particle, is like in itself, intrinsically. And, arguably, it never could, since its conceptual resources – mathematical concepts, together with the concepts of causation and spatiotemporal position – are suitable only for describing structures and processes, not intrinsic qualities. Yet it is plausible to think that particles can’t just be collections of dispositions; they must have some intrinsic categorical properties that give rise to their dispositions.

Here, some philosophers argue, there is scope for an exciting synthesis. Maybe consciousness – the elusive subjective aspect of our brain states – is the ingredient missing from physics. Perhaps phenomenal properties, or ‘proto-phenomenal’ precursors of them, are the fundamental intrinsic properties of matter we’re looking for, and each subatomic particle is a tiny conscious subject. This solves the hard problem: brain and consciousness emerge together when billions of basic particles are assembled in the right way. The brain arises from the particles’ dispositions to interact and combine, and consciousness arises from what the particles are like in themselves. They are two sides of the same coin – or, rather, since on this view consciousness is the fundamental reality underlying physical reality, brains are manifestations of consciousness. As it holds that there is a single reality underlying both mind and matter, panpsychism is a form of monism. The label ‘Russellian monism’ is sometimes used for it and closely related positions, because Bertrand Russell proposed similar ideas in The Analysis of Matter (1927).

Panpsychism also promises to solve another problem. It seems obvious that conscious experiences affect how we behave. Yet it looks as if science will be able to explain our behaviour entirely in terms of brain states, without mentioning consciousness at all. So something seems to get left out here. But if panpsychism is true, then this problem disappears. For brain science is, albeit indirectly, mentioning consciousness when it gives explanations in terms of brain states, since consciousness is just the intrinsic aspect of those states.

There are problems for panpsychism, of course, perhaps the most important being the combination problem. Panpsychists hold that consciousness emerges from the combination of billions of subatomic consciousnesses, just as the brain emerges from the organisation of billions of subatomic particles. But how do these tiny consciousnesses combine? We understand how particles combine to make atoms, molecules and larger structures, but what parallel story can we tell on the phenomenal side? How do the micro-experiences of billions of subatomic particles in my brain combine to form the twinge of pain I’m feeling in my knee? If billions of humans organised themselves to form a giant brain, each person simulating a single neuron and sending signals to the others using mobile phones, it seems unlikely that their consciousnesses would merge to form a single giant consciousness. Why should something similar happen with subatomic particles?

A related problem concerns conscious subjects. It’s plausible to think that there can’t be conscious experience without a subject who has the experience. I assume that we and many other animals are conscious subjects, and panpsychists claim that subatomic particles are too. But is that it? Are there any intermediate-level conscious subjects (molecules, crystals, plants?), formed like us from combinations of micro-subjects? It’s hard to see why subjecthood should be restricted to just subatomic particles and higher animals, but equally hard to think of any non-arbitrary way of extending the category.

Despite these problems, many people feel that panpsychism offers the best hope of cracking the hard problem. The philosophers David Chalmers, Galen Strawson and Philip Goff, among others, have defended versions of it, and there is a lively ongoing discussion of the problems it faces and the best way to respond to them in contemporary philosophical books and journals. Is it the bold move we need to make progress on consciousness?

I remain unpersuaded, and I’m not alone in this. Even if we accept that basic physical entities must have some categorical nature (and it might be that we don’t; perhaps at bottom reality is just dispositions), consciousness is an unlikely candidate for this fundamental property. For, so far as our evidence goes, it is a highly localised phenomenon that is specific not only to brains but to particular states of brains (attended intermediate-level sensory representations, according to one influential account). It appears to be a specific state of certain highly complex information-processing systems, not a basic feature of the Universe.

Moreover, panpsychism gives consciousness a curious status. It places it at the very heart of every physical entity yet threatens to render it explanatorily idle. For the behaviour of subatomic particles and the systems they constitute promises to be fully explained by physics and the other physical sciences. Panpsychism offers no distinctive predictions or explanations. It finds a place for consciousness in the physical world, but that place is a sort of limbo. Consciousness is indeed a hard nut to crack, but I think we should exhaust the other options before we take a metaphysical sledgehammer to it.


So I’m not a panpsychist. I agree with panpsychists that it seems as if our experiences have a private, intrinsic nature that cannot be explained by science. But I draw a different conclusion from this. Rather than thinking that this is a fundamental property of all matter, I think that it is an illusion. As well as senses for representing the external world, we have a sort of inner sense, which represents aspects of our own brain activity. And this inner sense gives us a very special perspective on our brain states, creating the impression that they have intrinsic phenomenal qualities that are quite different from all physical properties. It is a powerful impression, but just an impression. Consciousness, in that sense, is not everywhere but nowhere. Perhaps this seems as strange a view as panpsychism. But thinking about consciousness can lead one to embrace strange views.Aeon counter – do not remove

Keith Frankish

--
http://bigthink.com/aeon-ideas/why-panpsychism-fails-to-solve-the-mystery-of-consciousness
"Quem for brasileiro, siga-me." Duque de Caxias

"Vamos mudar isso aí. Tá OK?" Capitão Mito Bolsonaro

Offline Gigaview

  • Nível Máximo
  • *
  • Mensagens: 13.790
  • "Minha espada não tem partidos."
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1309 Online: 11 de Dezembro de 2017, 20:08:00 »
Successful experiments with monkeys have shown that information can be "injected" directly into the brain. Future experiments with humans may show that this is possible even without the subject being conscious of the information. The idea from the film The Matrix of downloading knowledge and skills directly into the brain could become reality.



http://www.cell.com/neuron/fulltext/S0896-6273(17)31034-6

"Quem for brasileiro, siga-me." Duque de Caxias

"Vamos mudar isso aí. Tá OK?" Capitão Mito Bolsonaro

Offline Gigaview

  • Nível Máximo
  • *
  • Mensagens: 13.790
  • "Minha espada não tem partidos."
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1310 Online: 11 de Dezembro de 2017, 20:14:55 »
OK... if information can be "injected" directly into the brain, perhaps it can also be "extracted" and made available for download.

Quote
THE DOWNLOADABLE BRAIN: WE’RE CLOSER THAN WE THINK TO IMMORTALITY

Two millennia ago, a young carpenter appeared in what is now Israel and, in addition to suggesting some guidelines on personal behavior, offered the gift of eternal life to those who believed in him. This went over well, since the prevailing religion of his people was noticeably weak in that department, lacking clear rewards for the virtuous. His apostle presented the deal in no uncertain terms: “He that heareth my word,” said John, “and believeth on him that sent me, hath everlasting life.” So far nobody has come back to testify to the veracity of this offer on the next plane of existence, but no one has disproved it, either. So that works for some people. It still doesn’t get to the nub of the matter, though. You still have to die in that scenario.

Some have searched for magic poultices, creams and liquids. In the 16th century, it was Ponce de Leon who reportedly searched Florida for waters that would stave off his rapidly approaching old age. Today, people follow in his footsteps, settling down in Boca, Hollywood and Jupiter Beach to achieve the same objective, with much the same lack of results, and in Beverly Hills, gorgons with crimped, distorted mouths and desiccated eyesacks roam Rodeo Drive, tweaking and slicing into themselves as they worship at the shrine of perpetual youth. Some even look okay at a very great distance.

It’s discouraging. Even if one buys into the notion of reincarnation, you are still only preserving the spirit; consciousness doesn’t make the trip from one life to the next. Plus, there is also the possibility that one will return in the next life as a stoat, or a guy whose karma involves the weekly cleaning of portable toilets at construction sites. Not the true vision of eternal life most of us would like, which involves sticking around without ever shuffling off this mortal coil at all, seeing the world change and evolve over generations.

No, for true advancement towards humanity’s most elusive goal, we must turn to the religion that we tend to like now: Technology. And the good news is that in this area we may actually be on the brink of success. For today, enormous gains are being made in the branch of computer science that is working to deliver eternal life to those who can afford it. Those in the hunt are far from snake-oil salesmen or alt-right marketers of nutty fluids. These are distinguished scientists making the prognostications. Nick Bostrom of Oxford University described the concept: “If emulation of particular brains is possible and affordable,” he wrote in a 2008 paper, “and if concerns about individual identity can be met, such emulation would enable back-up copies and ‘digital immortality.’”

Let’s take a moment to consider why this whole idea is not just futurist bushwah. The human brain, while based on an organic platform, is essentially a vast electronic switching station. If such is the case—or even fundamentally the case, with some, as it were, gray matter on the edges—why not work toward a method of emulating the brain-based persona of the individual in its entirety the way you would make a disc image on your laptop and then, when the operations and digital activities are mirrored in this manner, simply backing it up? Once it’s backed up, it can then be stored in a suitable, safe digital warehouse and then, when that receptacle has been created, downloaded into a young, vital living entity and voila. Old mind. Young body. Just what you always wanted. A hundred years later, you can do it again.

There is already significant scientific literature on the issue of personality transfer. Nobody writing about it doubts it can be done. Christof Koch, Chief Scientific Officer of the Allen Institute for Brain Science in Seattle, and Giulio Tononi, who holds a Distinguished Chair in Consciousness Science at the University of Wisconsin, offered this view on the circular of the Institute for Ethics and Emerging Technologies, “Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry and biology; it does not arise from some magical or otherworldly quality.” Once one assumes this sort of materialist view of the mind, it’s not difficult to imagine moving the contents of this mechanical entity from one housing to another.

“Immortality will be like Tesla—available at first only to the very, very, very rich, then, after a while, commoditized for the upper middle classes but pretty much stopping right there.”

Now, it is true that the task of performing a digital upload of an entire individual consciousness—its knowledge, earned experience, memories going back to the womb—the tech on that part of the process is in its infancy. But gains are being made. Thoughts and simple commands are now being transmitted over short distances by individuals with gizmos attached to their heads, moving little objects around at a distance by the power of their thoughts. It’s not much. But it shows that brain activity can be digitized and transmitted.

But let’s face it. We’re not going to go around with wires sticking out of our heads. The good news is that this really shouldn’t be necessary, not the way things appear to be going. Within just a very few years, the transporting of the electronic entity that is the human brain and all its contents will be vastly advanced—indeed, made possible—by a tremendous development in digital communications: that is, the widespread implantation of the cell phone and all its many wonderful functions right into your cranium.

Do you doubt it? I don’t. Go to any Starbucks, any airport, hotel lobby, public space, and you will see the entire strolling pageant of humanity with their noses firmly attached to a screen. Couples in restaurants. Kids hanging out at home. Staring into the little device. It’s not sustainable. It’s only a matter of time until a new way of inputting that data will be made available to those who want it and can afford it—driven by that ultimate arbiter of product development—consumer demand.

Tell the truth. Isn’t it a pain to be constantly carrying that thing around all the time? How many times a week do you lose it? Wouldn’t you like to be able to employ its many functions simply by touching your head, or maybe even just thinking about something? How would it be to be in touch with the Cloud 24/7? I propose the mastoid bone behind the ear. It’s unoccupied at the moment, totally unmonetized. It’s near the ocular and auditory systems, not to mention the wetware of the brain. It won’t be messing with your spine, which is complicated enough. The mastoid bone is perfect. And won’t it be nice to have your hands free?

Issues of storage have already been solved. The Cloud has already given us so much. Now it can be given the job of housing the collected personas of the ruling class, and still have room for all the pictures, music, movies and personal preferences of every mind on the planet. It is the quintessential storage container necessary to keep your consciousness safe while it awaits your brand new body. Until then, if your old body conks out, don’t worry. You still exist.

That’s it—a solution to the problem of death. Those now in the hunt are powerful, enormously wealthy, and have succeeded in every enterprise to which they have put their imaginative and well-funded minds. And they’re all at the precise age when the prospect of death rears its bony head. They’re also, it’s safe to say, with all due respect for their towering achievements and wealth, toxic narcissists who cannot imagine a world that might continue without them bestride it. If this was a start-up, I’d invest in it.

I figure that when it happens, immortality will be like Tesla—available at first only to the very, very, very rich, then, after a while, commoditized for the upper middle classes but pretty much stopping right there. The rest of humanity will either have access to a very inferior product or have to go ahead and die. When that day comes, there will officially be just two classes of homo sapiens: human beings and immortals. The human beings, all implanted and plugged into the corporate Cloud their entire lives, may not end up being all that sapiens. And as for the immortals, I don’t think they’re going to be very nice. After all, they haven’t been very nice this lifetime around, have they?

You may have noticed that we have yet to consider the final step of the process: the download. I have saved it for last because, frankly, it presents, I believe, a virtually insurmountable problem that attends this entire enterprise. What is this body that is created to house the ancient rich person paying for the procedure? Or, more accurately, who is that? Is it a person? A thing? Some combination of both?

Here are things it won’t be. It won’t be a baby, stuffed with the mind of an Elon Musk in its tiny cranium. It won’t be a mechanical person, a robot, because nobody with a trillion dollars in the bank and a lust for food and sex and power and fast cars is going to want to go around in an unfeeling casket like the ghost in the shell. They’re paying for immortality. They want to roll down the windows and let the wind blow back their brand new hair. And I don’t think any real mogul wants to wait for several years while a pod person grows in its cocoon.

Which is why the final step of this technology will have to be the creation of fully baked, living, organic human beings ready for download, empty of any consciousness whatsoever. And such creatures will refuse to exist. Because with life comes consciousness. And with consciousness comes the drive to exist.

Any brain capable of receiving an entire person has to be functioning on its own. It must be generating rudimentary thoughts of its own. It probably needs to be jump started into some form of consciousness to prove that the download will work. It must be a life. And as a life, it will have all the things that come with that blessing. And here comes the big, rich motherfucker to take that all away? How do we think that transaction is going to work out?

Still, the portability of consciousness is a very seductive and beautiful notion, isn’t it? Personally? I’d rather be housed in a cactus looking out the window of a cottage in Palm Desert than moldering beneath the earth for eternity. So I’m rooting for the bad guys here. God bless tech.
http://lithub.com/the-downloadable-brain-were-closer-than-we-think-to-immortality/
"Quem for brasileiro, siga-me." Duque de Caxias

"Vamos mudar isso aí. Tá OK?" Capitão Mito Bolsonaro


Offline Gigaview

  • Nível Máximo
  • *
  • Mensagens: 13.790
  • "Minha espada não tem partidos."
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1312 Online: 12 de Dezembro de 2017, 22:46:19 »
It's not "Matrix"-style knowledge injection, but it's quite interesting all the same.

...it isn't that yet, but it may become reality.

Offline Buckaroo Banzai

  • Nível Máximo
  • *
  • Mensagens: 36.028
  • Sexo: Masculino
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1313 Online: 22 de Janeiro de 2018, 14:23:13 »
...perhaps, in some very distant future, with some technology we can't even begin to imagine.

Would they develop a "write head" that stimulates the precise formation of neuronal connections?

Offline Gigaview

  • Nível Máximo
  • *
  • Mensagens: 13.790
  • "Minha espada não tem partidos."
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1314 Online: 02 de Fevereiro de 2018, 00:22:43 »
Quote
The idea that everything from spoons to stones are conscious is gaining academic credibility


Consciousness permeates reality. Rather than being just a unique feature of human subjective experience, it’s the foundation of the universe, present in every particle and all physical matter.

This sounds like easily-dismissible bunkum, but as traditional attempts to explain consciousness continue to fail, the “panpsychist” view is increasingly being taken seriously by credible philosophers, neuroscientists, and physicists, including figures such as neuroscientist Christof Koch and physicist Roger Penrose.

“Why should we think common sense is a good guide to what the universe is like?” says Philip Goff, a philosophy professor at Central European University in Budapest, Hungary. “Einstein tells us weird things about the nature of time that counters common sense; quantum mechanics runs counter to common sense. Our intuitive reaction isn’t necessarily a good guide to the nature of reality.”

David Chalmers, a philosophy of mind professor at New York University, laid out the “hard problem of consciousness” in 1995, demonstrating that there was still no answer to the question of what causes consciousness. Traditionally, two dominant perspectives, materialism and dualism, have provided a framework for solving this problem. Both lead to seemingly intractable complications.
 
The materialist viewpoint states that consciousness is derived entirely from physical matter. It’s unclear, though, exactly how this could work. “It’s very hard to get consciousness out of non-consciousness,” says Chalmers. “Physics is just structure. It can explain biology, but there’s a gap: Consciousness.” Dualism holds that consciousness is separate and distinct from physical matter—but that then raises the question of how consciousness interacts and has an effect on the physical world.

Panpsychism offers an attractive alternative solution: Consciousness is a fundamental feature of physical matter; every single particle in existence has an “unimaginably simple” form of consciousness, says Goff. These particles then come together to form more complex forms of consciousness, such as humans’ subjective experiences. This isn’t meant to imply that particles have a coherent worldview or actively think, merely that there’s some inherent subjective experience of consciousness in even the tiniest particle.

Panpsychism doesn’t necessarily imply that every inanimate object is conscious. “Panpsychists usually don’t take tables and other artifacts to be conscious as a whole,” writes Hedda Hassel Mørch, a philosophy researcher at New York University’s Center for Mind, Brain, and Consciousness, in an email. “Rather, the table could be understood as a collection of particles that each have their own very simple form of consciousness.”

But, then again, panpsychism could very well imply that conscious tables exist: One interpretation of the theory holds that “any system is conscious,” says Chalmers. “Rocks will be conscious, spoons will be conscious, the Earth will be conscious. Any kind of aggregation gives you consciousness.”

Interest in panpsychism has grown in part thanks to the increased academic focus on consciousness itself following on from Chalmers’ “hard problem” paper. Philosophers at NYU, home to one of the leading philosophy-of-mind departments, have made panpsychism a feature of serious study. There have been several credible academic books on the subject in recent years, and popular articles taking panpsychism seriously.

One of the most popular and credible contemporary neuroscience theories on consciousness, Giulio Tononi’s Integrated Information Theory, further lends credence to panpsychism. Tononi argues that something will have a form of “consciousness” if the information contained within the structure is sufficiently “integrated,” or unified, and so the whole is more than the sum of its parts. Because it applies to all structures—not just the human brain—Integrated Information Theory shares the panpsychist view that physical matter has innate conscious experience.

Goff, who has written an academic book on consciousness and is working on another that approaches the subject from a more popular-science perspective, notes that there were credible theories on the subject dating back to the 1920s. Thinkers including philosopher Bertrand Russell and physicist Arthur Eddington made a serious case for panpsychism, but the field lost momentum after World War II, when philosophy became largely focused on analytic philosophical questions of language and logic. Interest picked up again in the 2000s, thanks both to recognition of the “hard problem” and to increased adoption of the structural-realist approach in physics, explains Chalmers. This approach views physics as describing structure, and not the underlying nonstructural elements.

“Physical science tells us a lot less about the nature of matter than we tend to assume,” says Goff. “Eddington”—the English scientist who experimentally confirmed Einstein’s theory of general relativity in the early 20th century—“argued there’s a gap in our picture of the universe. We know what matter does but not what it is. We can put consciousness into this gap.”

In Eddington’s view, Goff writes in an email, it’s “silly to suppose that that underlying nature has nothing to do with consciousness and then to wonder where consciousness comes from.” Stephen Hawking has previously asked: “What is it that breathes fire into the equations and makes a universe for them to describe?” Goff adds: “The Russell-Eddington proposal is that it is consciousness that breathes fire into the equations.”

The biggest problem caused by panpsychism is known as the “combination problem”: Precisely how do small particles of consciousness collectively form more complex consciousness? Consciousness may exist in all particles, but that doesn’t answer the question of how these tiny fragments of physical consciousness come together to create the more complex experience of human consciousness.

Any theory that attempts to answer that question would effectively determine which complex systems—from inanimate objects to plants to ants—count as conscious.

An alternative panpsychist perspective holds that, rather than individual particles holding consciousness and coming together, the universe as a whole is conscious. This, says Goff, isn’t the same as believing the universe is a unified divine being; it’s more like seeing it as a “cosmic mess.” Nevertheless, it does reflect a perspective that the world is a top-down creation, where every individual thing is derived from the universe, rather than a bottom-up version where objects are built from the smallest particles. Goff believes quantum entanglement—the finding that certain particles behave as a single unified system even when they’re separated by such immense distances there can’t be a causal signal between them—suggests the universe functions as a fundamental whole rather than a collection of discrete parts.

Such theories sound incredible, and perhaps they are. But then again, so is every other possible theory that explains consciousness. “The more I think about [any theory], the less plausible it becomes,” says Chalmers. “One starts as a materialist, then turns into a dualist, then a panpsychist, then an idealist,” he adds, echoing his paper on the subject. Idealism holds that conscious experience is the only thing that truly exists. From that perspective, panpsychism is quite moderate.

Chalmers quotes his colleague, the philosopher John Perry, who says: “If you think about consciousness long enough, you either become a panpsychist or you go into administration.”

https://qz.com/1184574/the-idea-that-everything-from-spoons-to-stones-are-conscious-is-gaining-academic-credibility/
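The "integrated" part of Tononi's idea can be given a rough numerical flavor. The sketch below is NOT the actual IIT Φ calculation; it computes total correlation (the sum of the parts' entropies minus the whole's entropy), a much cruder stand-in for how much a system is "more than the sum of its parts":

```python
# Toy illustration (not Tononi's phi): measure how much two bits are
# statistically "integrated" by comparing part entropies with whole entropy.
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy in bits of an empirical distribution of samples."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def integration(samples):
    """Total correlation: sum of part entropies minus whole-system entropy."""
    h_whole = entropy(samples)
    h_parts = entropy([a for a, _ in samples]) + entropy([b for _, b in samples])
    return h_parts - h_whole

# Two coupled bits: B always copies A.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two independent bits: all four joint states equally likely.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(integration(coupled))      # 1.0 bit: the parts share information
print(integration(independent))  # 0.0 bits: no integration at all
```

For the perfectly coupled pair the parts share a full bit of information; for the independent pair the measure is zero. Real Φ additionally partitions the system and weighs its cause-effect structure, which this toy ignores entirely.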

Offline Pedro Reis

  • Nível 34
  • *
  • Mensagens: 2.641
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1315 Online: 02 de Fevereiro de 2018, 01:10:47 »
Criso is going to love this article.

Some of these considerations are the same ones I've been making here for a while, but the conjecture is crazy. If a spoon is conscious, let me ask: if we break that spoon into two pieces, do we instantly get two consciousnesses?

I am a single consciousness at each single instant of time, but the unity of things is an illusion created by the mind to give shape to reality. There is no ONE spoon to which a consciousness could be attached.

Does a tree have consciousness? Or is each leaf of the tree conscious, and the trunk and each root as well? When the dry leaf falls in autumn, does it become a unit upon detaching and acquire its own consciousness by no longer being part of the tree?

That the universe is a single whole, the only real unity, may make sense; but if we are talking about consciousness, I know (and this is one of the few certainties I can have) that my consciousness is a unit detached from any supposed universal consciousness, even if it is somehow interacting with and receiving information from that supra-consciousness. So I can't see how one gets from this idea to particles of consciousness combining to form "more complex consciousnesses", as parts of a system. Subjective experience is not an experience that happens divided into parts; it is not a system. Everything seems to indicate that comparing an individual's individualized consciousness (which is what effectively makes him an individual) to the system of billions of cells, living beings interacting in coordination to constitute that individual's living organism, is a false analogy.

Until proven otherwise, I prefer the simpler and less magical hypothesis that, in some way, nervous systems generate the capacity for subjective experience.
« Última modificação: 02 de Fevereiro de 2018, 01:21:41 por Pedro Reis »

Offline Buckaroo Banzai

  • Nível Máximo
  • *
  • Mensagens: 36.028
  • Sexo: Masculino
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1316 Online: 02 de Fevereiro de 2018, 20:38:08 »
I haven't read the text Gigaview posted, but I think the idea isn't that literal. To begin with, "consciousness" here wouldn't be what we have in the state of non-unconsciousness, but something more basic and generalized, which would constitute the animal consciousness organized to deal with stimuli from the environment. And this in turn would come about through "in some way, nervous systems generate the capacity for subjective experience."

The difference is that something more fundamental in nature is being postulated, out of which sufficiently complex nervous systems do this, instead of their managing to do it entirely "from nothing". Something a bit like what acidity is to digestion, I think. Something to which this "ghostly" aspect of the thing could eventually be reduced, rather than its being an "absolute emergence" of something with no physical basis at all, or that somehow has a physical basis only there.

That being so, a spoon still wouldn't have anything like our consciousness, only a latent "raw material". I don't know whether they postulate that, for whatever reason, the spoon-shaped cluster of atoms should be "special" relative to any arbitrary part of it, or even to part of the spoon plus the matter around it. Perhaps an analogy would be an "off-air channel": the band available for carrying the signal is there, but unused, so no TV set is receiving anything. And since there isn't even a broadcaster, there isn't even the potential for it. So it's not quite as if the universe itself inherently carried television broadcasts. It merely has, in the radio spectrum, the support for them.

Or at least that's more or less one of the perspectives, not necessarily that one.

Offline Pedro Reis

  • Nível 34
  • *
  • Mensagens: 2.641
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1317 Online: 03 de Fevereiro de 2018, 10:26:37 »
I haven't read the text Gigaview posted, but I think the idea isn't that literal. To begin with, "consciousness" here wouldn't be what we have in the state of non-unconsciousness, but something more basic and generalized, which would constitute the animal consciousness organized to deal with stimuli from the environment. And this in turn would come about through "in some way, nervous systems generate the capacity for subjective experience."

The difference is that something more fundamental in nature is being postulated, out of which sufficiently complex nervous systems do this, instead of their managing to do it entirely "from nothing". [...]

Yes, that's practically the same kind of conjecture I made in my last post in the artificial-intelligence thread. It's what makes the most sense to me: that there must be some intrinsic physical aspect of the structure of the universe associated with the phenomenon of consciousness (I believe this supposed property is still unknown). In a similar way, the mysterious phenomenon of gravity is postulated to be the consequence of an intrinsic aspect of the universe: namely, the peculiar way in which portions of matter-energy would deform the structure of spacetime.

If we stop to think about it, gravity is also a phenomenon quite hard for the human mind to grasp. So much so that, before the 17th century, no one had even noticed it as a phenomenon.

The faculty of self-consciousness in some living beings was always an obvious fact, and likewise no one ever failed to notice that whatever was thrown upward returned to the ground. Yet no causal physical property was postulated for this phenomenon, much less identified.

When Newton proposed that particles of matter exerted action-at-a-distance forces on other particles (in the form of action-reaction pairs), and that these forces would take effect instantaneously no matter how far apart those particles were, he himself acknowledged that the idea had something absurd about it. Newtonian gravity was a phenomenon of the same nature as that produced by the collision of two particles, yet it occurred without any kind of interaction, and he himself found that incomprehensible.

Still, to propose that space could be something in itself, with the property of being plastically affected by matter/energy, was the kind of speculation beyond the reach of any scholar of the mid-17th century. That impossibility derived from the human mind not having been "designed" to build a representation of that aspect of objective reality which is space as something existing in itself. Unlike the way the mind naturally conceptualizes the objects contained in space as "things", its intuitive reality conceptualizes space as nothing. But today we know that the "reality" constructed by the mind is an illusion, an imperfect and very limited representation of whatever objective reality may be. And we arrive at this realization through reason, when logical reasoning is confronted with phenomena that do not fit our intuitively constructed models of reality.

Therefore, only when these contradictions become known through the observation of phenomena does reason manage to conceive (indeed, is forced to conceive) counterintuitive models of reality capable of more accurate predictions about the facts of objective reality. Even so, the mind cannot transcend itself, and remains bound to its own inherent model of reality. Reason merely grants it the capacity to understand that this model is imperfect or illusory.

Thus the concepts of so-called Modern Physics could not have been proposed before the development of Classical Physics, because it was the latter that made possible the knowledge of precisely those phenomena that would expose the inconsistencies of the old science.

It may be that the physics capable of building models which, if they cannot explain, can at least accurately predict the behavior of the phenomena involved in producing consciousness has yet to be born, awaiting the observation of facts that contradict the predictions of models built on current knowledge. Science is not a finished enterprise; we may still be only at the beginning of the road.

As an example: not long ago, all predictions about the evolution of the universe considered only the possibility that its expansion was being "braked" by gravity. Observation surprised everyone by revealing that the expansion is accelerating, and only after that discovery could insights arise about how the Big Bang may have originated.

We will only be able to say there is a genuinely scientific approach to the hard problem of consciousness when that science can build models whose predictions allow us to construct artificially conscious systems. Until then, almost everything in this discussion cannot escape "philosophical bullshit" with generous pinches of metaphysics.

So, of course, this article is bullshit philosophy too, and since philosophies are partly a matter of taste, I didn't much like this one. The passages I didn't quite understand are the following:

Quote
These particles then come together to form more complex forms of consciousness, such as humans’ subjective experiences.

Quote
But, then again, panpsychism could very well imply that conscious tables exist: One interpretation of the theory holds that “any system is conscious,” says Chalmers. “Rocks will be conscious, spoons will be conscious, the Earth will be conscious. Any kind of aggregation gives you consciousness.”

The philosophy in question seems to be guessing, based on absolutely nothing observable, that the mere aggregation of particles produces more complex and/or superior conscious systems, which leads me to ask whether breaking a conscious brick in two will yield two parts that are each less conscious.

Returning to the comparison with that other mystery, gravity: the gravitational effects of systems of matter formed by aggregations of particles are observable and predicted precisely by the theory. That is what makes the theory something more than mere philosophy.

But just as no arbitrary aggregation of molecules generates any degree of life (even though biological systems are one of the inherent possibilities of the structure of the universe, only systems based on very specific carbon structures could give rise to the development of life on this planet), so something very specific about the peculiar nature of nervous systems must be involved as a cause in the phenomenon of subjective experience. But of course these too are only philosophical considerations of mine, because so far there is no theory of consciousness that meets the fundamental requirement of falsifiability. Which amounts, in fact, to acknowledging that there is as yet no Theory of Consciousness.

Offline Gigaview

  • Nível Máximo
  • *
  • Mensagens: 13.790
  • "Minha espada não tem partidos."
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1318 Online: 03 de Fevereiro de 2018, 21:18:48 »
Quote
Panpsychism is crazy, but it’s also most probably true
Philip Goff

is associate professor in philosophy at the Central European University in Budapest. His research interest is in consciousness and he blogs at Conscience and Consciousness.

Common sense tells us that only living things have an inner life. Rabbits and tigers and mice have feelings, sensations and experiences; tables and rocks and molecules do not. Panpsychists deny this datum of common sense. According to panpsychism, the smallest bits of matter – things such as electrons and quarks – have very basic kinds of experience; an electron has an inner life.

The main objection made to panpsychism is that it is ‘crazy’ and ‘just obviously wrong’. It is thought to be highly counterintuitive to suppose that an electron has some kind of inner life, no matter how basic, and this is taken to be a very strong reason to doubt the truth of panpsychism. But many widely accepted scientific theories are also crazily counter to common sense. Albert Einstein tells us that time slows down at high speeds. According to standard interpretations of quantum mechanics, particles have determinate positions only when measured. And according to Charles Darwin’s theory of evolution, our ancestors were apes. All of these views are wildly at odds with our common-sense view of the world, or at least they were when they were first proposed, but nobody thinks this is a good reason not to take them seriously. Why should we take common sense to be a good guide to how things really are?

No doubt the willingness of many to accept special relativity, natural selection and quantum mechanics, despite their strangeness from the point of view of pre-theoretical common sense, is a reflection of their respect for the scientific method. We are prepared to modify our view of the world if we take there to be good scientific reason to do so. But in the absence of hard experimental proof, people are reluctant to attribute consciousness to electrons.

Yet scientific support for a theory comes not merely from the fact that it explains the evidence, but from the fact that it is the best explanation of the evidence, where a theory is ‘better’ to the extent that it is more simple, elegant and parsimonious than its rivals. Suppose we have two theories – Theory A and Theory B – both of which account for all observations, but Theory A postulates four kinds of fundamental force while Theory B postulates 15 kinds of fundamental force. Although both theories account for all the data of observation, Theory A is to be preferred as it offers a more parsimonious account of the data. To take a real-world example, Einstein’s theory of special relativity supplanted the Lorentzian theory that preceded it, not because Einstein’s theory accounted for any observations that the Lorentzian theory could not account for, but because Einstein provided a much simpler and more elegant explanation of the relevant observations.

I maintain that there is a powerful simplicity argument in favour of panpsychism. The argument relies on a claim that has been defended by Bertrand Russell, Arthur Eddington and many others, namely that physical science doesn’t tell us what matter is, only what it does. The job of physics is to provide us with mathematical models that allow us to predict with great accuracy how matter will behave. This is incredibly useful information; it allows us to manipulate the world in extraordinary ways, leading to the technological advancements that have transformed our society beyond recognition. But it is one thing to know the behaviour of an electron and quite another to know its intrinsic nature: how the electron is, in and of itself. Physical science gives us rich information about the behaviour of matter but leaves us completely in the dark about its intrinsic nature.

In fact, the only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – involves experience. We now face a theoretical choice. We either suppose that the intrinsic nature of fundamental particles involves experience or we suppose that they have some entirely unknown intrinsic nature. On the former supposition, the nature of macroscopic things is continuous with the nature of microscopic things. The latter supposition leads us to complexity, discontinuity and mystery. The theoretical imperative to form as simple and unified a view as is consistent with the data leads us quite straightforwardly in the direction of panpsychism.

In the public mind, physics is on its way to giving us a complete picture of the nature of space, time and matter. While in this mindset, panpsychism seems improbable, as physics does not attribute experience to fundamental particles. But once we realise that physics tells us nothing about the intrinsic nature of the entities it talks about, and indeed that the only thing we know for certain about the intrinsic nature of matter is that at least some material things have experiences, the issue looks very different. All we get from physics is this big black-and-white abstract structure, which we must somehow colour in with intrinsic nature. We know how to colour in one bit of it: the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen.

Panpsychism is crazy. But it is also highly likely to be true.
https://aeon.co/ideas/panpsychism-is-crazy-but-its-also-most-probably-true
"Quem for brasileiro, siga-me." Duque de Caxias

"Vamos mudar isso aí. Tá OK?" Capitão Mito Bolsonaro

Offline Gigaview

  • Nível Máximo
  • *
  • Mensagens: 13.790
  • "Minha espada não tem partidos."
Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1319 Online: 03 de Fevereiro de 2018, 22:00:52 »
Citar
Conscious exotica

From algorithms to aliens, could humans ever understand minds that are radically unlike our own?

Murray Shanahan is professor of cognitive robotics at Imperial College London and a Spoke Leader at the Leverhulme Centre for the Future of Intelligence. His latest book is The Technological Singularity (2015).

In 1984, the philosopher Aaron Sloman invited scholars to describe ‘the space of possible minds’. Sloman’s phrase alludes to the fact that human minds, in all their variety, are not the only sorts of minds. There are, for example, the minds of other animals, such as chimpanzees, crows and octopuses. But the space of possibilities must also include the minds of life-forms that have evolved elsewhere in the Universe, minds that could be very different from any product of terrestrial biology. The map of possibilities includes such theoretical creatures even if we are alone in the Cosmos, just as it also includes life-forms that could have evolved on Earth under different conditions.

We must also consider the possibility of artificial intelligence (AI). Let’s say that intelligence ‘measures an agent’s general ability to achieve goals in a wide range of environments’, following the definition adopted by the computer scientists Shane Legg and Marcus Hutter. By this definition, no artefact exists today that has anything approaching human-level intelligence. While there are computer programs that can out-perform humans in highly demanding yet specialised intellectual domains, such as playing the game of Go, no computer or robot today can match the generality of human intelligence.
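Legg and Hutter also give that definition a formal reading. The following is a sketch from memory of their measure of 'universal intelligence', so the notation should be checked against their paper before being relied on:

```latex
% Universal intelligence of an agent \pi (Legg & Hutter, sketched from memory):
% performance in every computable environment counts, weighted toward
% simpler (lower Kolmogorov-complexity) environments.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

where E is the set of computable reward-bearing environments, K(μ) is the Kolmogorov complexity of environment μ, and V is the expected total reward agent π achieves in μ. On this reading, 'a wide range of environments' means every computable environment at once, with the simplest environments weighted most heavily.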

But it is artefacts possessing general intelligence – whether rat-level, human-level or beyond – that we are most interested in, because they are candidates for membership of the space of possible minds. Indeed, because the potential for variation in such artefacts far outstrips the potential for variation in naturally evolved intelligence, the non-natural variants might occupy the majority of that space. Some of these artefacts are likely to be very strange, examples of what we might call ‘conscious exotica’.

In what follows I attempt to meet Sloman’s challenge by describing the structure of the space of possible minds, in two dimensions: the capacity for consciousness and the human-likeness of behaviour. Implicit in this mapping seems to be the possibility of forms of consciousness so alien that we would not recognise them. Yet I am also concerned, following Ludwig Wittgenstein, to reject the dualistic idea that there is an impenetrable realm of subjective experience that forms a distinct portion of reality. I prefer the notion that ‘nothing is hidden’, metaphysically speaking. The difficulty here is that accepting the possibility of radically inscrutable consciousness seemingly readmits the dualistic proposition that consciousness is not, so to speak, ‘open to view’, but inherently private. I try to show how we might avoid that troubling outcome.

Thomas Nagel’s celebrated treatment of the (modestly) exotic subjectivity of a bat is a good place to start. Nagel wonders what it is like to be a bat, and laments that ‘if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task’. A corollary of Nagel’s position is that certain kinds of facts – namely facts that are tied to a very different subjective point of view – are inaccessible to our human minds. This supports the dualist’s claim that no account of reality could be complete if it comprised only objective facts and omitted the subjective. Yet I think the dualistic urge to cleave reality in this way is to be resisted. So, if we accept Nagel’s reasoning, conscious exotica present a challenge.

But bats are not the real problem, as I see it. The moderately exotic inner lives of non-human animals present a challenge to Nagel only because he accords ontological status to an everyday indexical distinction. I cannot be both here and there. But this platitude does not entail the existence of facts that are irreducibly tied to a particular position in space. Similarly, I cannot be both a human and a bat. But this does not entail the existence of phenomenological facts that are irreducibly tied to a particular subjective point of view. We should not be fooled by the presence of the word ‘know’ into seeing the sentence ‘as a human, I cannot know what it’s like to be a bat’ as expressing anything more philosophically puzzling than the sentence ‘I am a human, not a bat.’ We can always speculate about what it might be like to be a bat, using our imaginations to extend our own experience (as Nagel does). In doing so, we might remark on the limitations of the exercise. The mistake is to conclude, with Nagel, that there must be facts of the matter here, certain subjective ‘truths’, that elude our powers of collective investigation.

To explore the space of possible minds is to entertain the possibility of beings far more exotic than any terrestrial species

In this, I take my cue from the later Wittgenstein of The Philosophical Investigations (1953). The principle that underlies Wittgenstein’s rejection of private language – a language with words for sensations that only one person in the world could understand – is that we can talk only about what lies before us, what is public, what is open to collective view. As for anything else, well, ‘a nothing would serve as well as a something about which nothing can be said’. A word that referred to a private, inner sensation would have no useful function in our language. Of course, things can be hidden in a practical sense, like a ball beneath a magician’s cup, or a star that is outside our light cone. But nothing is beyond reach metaphysically speaking. When it comes to the inner lives of others, there is always more to be revealed – by interacting with them, by observing them, by studying how they work – but it makes no sense to speak as if there were something over and above what can ever be revealed. 

Following this train of thought, we should not impute unknowable subjectivity to other people (however strange), to bats or to octopuses, nor indeed to extra-terrestrials or to artificial intelligences. But here is the real problem, namely radically exotic forms of consciousness. Nagel reasonably assumes that ‘we all believe bats have experience’; we might not know what it is like to be a bat, yet we presume it is like something. But to explore the space of possible minds is to entertain the possibility of beings far more exotic than any terrestrial species. Could the space of possible minds include beings so inscrutable that we could not tell whether they had conscious experiences at all? To deny this possibility smacks of biocentrism. Yet to accept it is to flirt once more with the dualistic thought that there is a hidden order of subjective facts. In contrast to the question of what it is like to be an X, surely (we are tempted to say) there is a fact of the matter when it comes to the question of whether it is like anything at all to be an X. Either a being has conscious experience or it does not, regardless of whether we can tell.

Consider the following thought experiment. Suppose I turn up to the lab one morning to discover that a white box has been delivered containing an immensely complex dynamical system whose workings are entirely open to view. Perhaps it is the gift of a visiting extraterrestrial, or the unwanted product of some rival AI lab that has let its evolutionary algorithms run amok and is unsure what to do with the results. Suppose I have to decide whether or not to destroy the box. How can I know whether that would be a morally acceptable action? Is there any method or procedure by means of which I could determine whether or not consciousness was, in some sense, present in the box?

One way to meet this challenge would be to devise an objective measure of consciousness, a mathematical function that, given any physical description, returns a number that quantifies the consciousness of that system. The neuroscientist Giulio Tononi has purported to supply just such a measure, named Φ, within the rubric of so-called ‘integrated information theory’. Here, Φ describes the extent to which a system is, in a specific information-theoretic sense, more than the sum of its parts. For Tononi, consciousness is Φ in much the same sense that water is H2O. So, integrated information theory claims to supply both necessary and sufficient conditions for the presence of consciousness in any given dynamical system.
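Tononi's Φ proper is defined over a system's full cause-effect structure and is far more involved than anything that fits here. But the slogan 'more than the sum of its parts' can be illustrated with a toy measure of my own devising (emphatically not IIT's Φ): scan every bipartition of a small system, ask how much information crosses each split, and let integration be the weakest link.

```python
from itertools import combinations
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a {outcome: probability} mapping."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    """Marginalise a joint distribution onto the node indices in idx."""
    out = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def toy_phi(joint, n_nodes):
    """Toy integration: mutual information across the weakest bipartition.
    Zero means some split carries no information -- the system decomposes
    into independent parts; higher means more integrated."""
    nodes = range(n_nodes)
    weakest = float("inf")
    for r in range(1, n_nodes // 2 + 1):
        for part in combinations(nodes, r):
            rest = tuple(i for i in nodes if i not in part)
            mi = (entropy(marginal(joint, part))
                  + entropy(marginal(joint, rest))
                  - entropy(joint))
            weakest = min(weakest, mi)
    return weakest

# Two perfectly correlated bits: the only possible split carries a full bit.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair bits: the split carries nothing.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(toy_phi(correlated, 2))   # 1.0
print(toy_phi(independent, 2))  # 0.0
```

With three or more nodes the same function scans every bipartition, so a system in which one node simply ignores the others comes out at zero, however correlated the rest are. Note that this toy shares the feature criticised below: it is computed from the system's state distribution alone, with no reference to behaviour.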

The chief difficulty with this approach is that it divorces consciousness from behaviour. A completely self-contained system can have high Φ despite having no interactions with anything outside itself. Yet our everyday concept of consciousness is inherently bound up with behaviour. If you remark to me that someone was or was not aware of something (an oncoming car, say, or a friend passing in the corridor) it gives me certain expectations about their behaviour (they will or won’t brake, they will or won’t say hello). I might make similar remarks to you about what I was aware of in order to account for my own behaviour. ‘I can hardly tell the difference between those two colours’; ‘I’m trying to work out that sum in my head, but it’s too hard’; ‘I’ve just remembered what she said’; ‘It doesn’t hurt as much now’ – all these sentences help to explain my behaviour to fellow speakers of my language and play a role in our everyday social activity. They help us keep each other informed about what we have done in the past, are doing right now or are likely to do in the future.

It’s only when we do philosophy that we start to speak of consciousness, experience and sensation in terms of private subjectivity. This is the path to the hard problem/easy problem distinction set out by David Chalmers, to a metaphysically weighty division between inner and outer – in short, to a form of dualism in which subjective experience is an ontologically distinct feature of reality. Wittgenstein provides an antidote to this way of thinking in his remarks on private language, whose centrepiece is an argument to the effect that, insofar as we can talk about our experiences, they must have an outward, public manifestation. For Wittgenstein, ‘only of a living human being and what resembles (behaves like) a living human being can one say: it has sensations, it sees … it is conscious or unconscious’.

Through Wittgenstein, we arrive at the following precept: only against a backdrop of purposeful behaviour do we speak of consciousness. By these lights, in order to establish the presence of consciousness, it would not be sufficient to discover that a system, such as the white box in our thought experiment, had high Φ. We would need to discern purpose in its behaviour. For this to happen, we would have to see the system as embedded in an environment. We would need to see the environment as acting on the system, and the system as acting on the environment for its own ends. If the ‘system’ in question was an animal, then we already inhabit the same, familiar environment, notwithstanding that the environment affords different things to different creatures. But to discern purposeful behaviour in an unfamiliar system (or creature or being), we might need to engineer an encounter with it.

Even in familiar instances, this business of engineering an encounter can be tricky. For example, in 2006 the neuroscientist Adrian Owen and his colleagues managed to establish a simple form of communication with vegetative-state patients using an fMRI scanner. The patients were asked to imagine two different scenarios that are known to elicit distinct fMRI signatures in healthy individuals: walking through a house and playing tennis. A subset of vegetative-state patients generated appropriate fMRI signatures in response to the relevant verbal instruction, indicating that they could understand the instruction, had formed the intention to respond to it, and were able to exercise their imagination. This must count as ‘engineering an encounter’ with the patient, especially when their behaviour is interpreted against the backdrop of the many years of normal activity the patient displayed when healthy.

We don’t weigh up the evidence to conclude that our friends are probably conscious creatures. We simply see them that way, and treat them accordingly

Once we have discerned purposeful behaviour in our object of study, we can begin to observe and (hopefully) to interact with it. As a result of these observations and interactions, we might decide that consciousness is present. Or, to put things differently, we might adopt the sort of attitude towards it that we normally reserve for fellow conscious creatures.

The difference between these two forms of expression is worth dwelling on. Implicit in the first formulation is the assumption that there is a fact of the matter. Either consciousness is present in the object before us or it is not, and the truth can be revealed by a scientific sort of investigation that combines the empirical and the rational. The second formulation owes its wording to Wittgenstein. Musing on the skeptical thought that a friend could be a mere automaton – a phenomenological zombie, as we might say today – Wittgenstein notes that he is not of the opinion that his friend has a soul. Rather, ‘my attitude towards him is an attitude towards a soul’. (For ‘has a soul’ we can read something like ‘is conscious and capable of joy and suffering’.) The point here is that, in everyday life, we do not weigh up the evidence and conclude, on balance, that our friends and loved ones are probably conscious creatures like ourselves. The matter runs far deeper than that. We simply see them that way, and treat them accordingly. Doubt plays no part in our attitude towards them.

How do these Wittgensteinian sensibilities play out in the case of beings more exotic than humans or other animals? Now we can reformulate the white box problem of whether there is a method that can determine if consciousness, in some sense, is present in the box. Instead, we might ask: under what circumstances would we adopt towards this box, or any part of it, the sort of attitude we normally reserve for a fellow conscious creature?

Let’s begin with a modestly exotic hypothetical case, a humanoid robot with human-level artificial intelligence: the robot Ava from the film Ex Machina (2015), written and directed by Alex Garland.

In Ex Machina, the programmer Caleb is taken to the remote retreat of his boss, the reclusive genius and tech billionaire Nathan. He is initially told he is to be the human component in a Turing Test, with Ava as the subject. After his first meeting with Ava, Caleb remarks to Nathan that in a real Turing Test the subject should be hidden from the tester, whereas Caleb knows from the outset that Ava is a robot. Nathan retorts that: ‘The real test is to show you she is a robot. Then see if you still feel she has consciousness.’ (We might call this the ‘Garland Test’.) As the film progresses and Caleb has more opportunities to observe and interact with Ava, he ceases to see her as a ‘mere machine’. He begins to sympathise with her plight, imprisoned by Nathan and faced with the possibility of ‘termination’ if she fails his test. It’s clear by the end of the film that Caleb’s attitude towards Ava has evolved into the sort we normally reserve for a fellow conscious creature.

The arc of Ava and Caleb’s story illustrates the Wittgenstein-inspired approach to consciousness. Caleb arrives at this attitude not by carrying out a scientific investigation of the internal workings of Ava’s brain but by watching her and talking to her. His stance goes deeper than any mere opinion. In the end, he acts decisively on her behalf and at great risk to himself. I do not wish to imply that scientific investigation should not influence the way we come to see another being, especially in more exotic cases. The point is that the study of a mechanism can only complement observation and interaction, not substitute for it. How else could we truly come to see another conscious being as such, other than by inhabiting its world and encountering it for ourselves?

If something is built very differently to us, then however human-like its behaviour, its consciousness might be very different to ours

The situation is seemingly made simpler for Caleb because Ava is only a moderately exotic case. Her behaviour is very human-like, and she has a humanoid form (indeed, a female humanoid form that he finds attractive). But the fictional Ava also illustrates how tricky even seemingly straightforward cases can be. In the published script, there is a direction for the last scene of the film that didn’t make the final cut. It reads: ‘Facial recognition vectors flutter around the pilot’s face. And when he opens his mouth to speak, we don’t hear words. We hear pulses of monotone noise. Low pitch. Speech as pure pattern recognition. This is how Ava sees us. And hears us. It feels completely alien.’ This direction brings out the ambiguity that lies at the heart of the film. Our inclination, as viewers, is to see Ava as a conscious creature capable of suffering – as Caleb sees her. Yet it is tempting to wonder whether Caleb is being fooled, whether Ava might not be conscious after all, or at least not in any familiar sense.

This is a seductive line of thinking. But it should be entertained with extreme caution. It is a truism in computer science that specifying how a system behaves does not determine how that behaviour need be implemented in practice. In reality, human-level artificial intelligence exhibiting human-like behaviour might be instantiated in a number of different ways. It might not be necessary to copy the architecture of the biological brain. On the other hand, perhaps consciousness does depend on implementation. If a creature’s brain is like ours, then there are grounds to suppose that its consciousness, its inner life, is also like ours. Or so the thought goes. But if something is built very differently to us, with a different architecture realised in a different substrate, then however human-like its behaviour, its consciousness might be very different to ours. Perhaps it would be a phenomenological zombie, with no consciousness at all.

The trouble with this thought is the pull it exerts towards the sort of dualistic metaphysical picture we are trying to dispense with. Surely, we cry, there must be a fact of the matter here? Either the AI in question is conscious in the sense you and I are conscious, or it is not. Yet seemingly we can never know for sure which it is. It is a small step from here to the dualistic intuition that a private and subjective world of inner experience exists separately from the public and objective world of physical objects. But there is no need to yield to this dualistic intuition. Neither is there any need to deny it. It is enough to note that, in difficult cases, it is always possible to find out more about an object of study – to observe its behaviour under a wider set of circumstances, to interact with it in new ways, to investigate its workings more thoroughly. As we find out more, the way we treat it and talk about it will change, and in this way we will converge on the appropriate attitude to take towards it. Perhaps Caleb’s attitude to Ava would have changed if he’d had more time to interact with her, to find out what really made her tick. Or perhaps not.

So far, we have stuck to human-like entities and haven’t looked at anything especially exotic. But we need to extend our field of vision if we are to map out the space of possible minds. This affords the opportunity to think imaginatively about properly exotic beings, and to speculate about their putative consciousness.

There are various dimensions along which we might plot the many kinds of minds we can imagine. I have chosen two: human-likeness (the H-axis) and capacity for consciousness (the C-axis). An entity is human-like to the extent that it makes sense to describe its behaviour using the language we normally employ to describe humans – the language of beliefs, desires, emotions, needs, skills and so on. A brick, by this definition, scores very low. For very different reasons, an exotic entity might also score very low on human-likeness, if its behaviour were inscrutably complex or alien. On the C-axis, an entity’s capacity for consciousness corresponds to the richness of experience it is capable of. A brick scores zero on this axis (panpsychism notwithstanding), while a human scores significantly more than a brick.

Figure 1 below tentatively places a number of animals in the H-C plane, along axes that range from 0 (minimum) to 10 (maximum). A brick is shown at the (0, 0) position. Let’s consider the placement along the C-axis. There is no reason to suppose a human’s capacity for consciousness could not be exceeded by some other being. So humans (perhaps generously) are assigned 8 on this axis. The topic of animal consciousness is fraught with difficulty. But a commonplace assumption is that, in terrestrial biology at least, consciousness is closely related to cognitive prowess. In line with this intuition, a bee is assumed to have a smaller capacity for consciousness than a cat, which in turn has a slightly smaller capacity for consciousness than an octopus, while all three of those animals score less than a human being. Arranging animals this way can be justified by appealing to the range of capabilities studied in the field of animal cognition. These include associative learning, physical cognition, social cognition, tool use and manufacture, mental time travel (including future planning and episodic-like memory) and communication. An animal’s experience of the world is presumed to be enriched by each of these capabilities. For humans, we can add language, the capacity to form abstract concepts and the ability to think associatively in images and metaphors, among others.


Figure 1. Top: biology on the H-C Plane. Below: contemporary AI on the H-C Plane


Now let’s turn our attention to the H-axis. Tautologically, a human being has maximum human-likeness. So we get 10 on the H-axis. All non-human animals share certain fundamentals with humans. All animals are embodied, move and sense the world, and exhibit purposeful behaviour. Moreover, every animal has certain bodily needs in common with humans, such as food and water, and every animal tries to protect itself from harm and to survive. To this extent, all animals exhibit human-like behaviour, so all animals get 3 or more on the H-axis. Now, in order to describe and explain the behaviour of a non-human animal, we have recourse to the concepts and language we use to describe and explain human behaviour. An animal’s behaviour is said to be human-like to the extent that these conceptual and linguistic resources are necessary and sufficient to describe and explain it. And the more cognitively sophisticated a species is, the more of these linguistic and conceptual resources are typically required. So the cat and the octopus are higher up the H-axis than the bee, but lower than the human.
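The placements described above can be restated as data. Only the brick's (0, 0) and the human's (10, 8) are figures given in the text; the bee, cat and octopus coordinates below are invented values that merely respect the stated orderings (and the 'all animals get 3 or more on the H-axis' floor):

```python
# Hypothetical coordinates on Shanahan's H-C plane, each axis 0-10.
# (H = human-likeness, C = capacity for consciousness)
# Only brick (0, 0) and human (10, 8) are stated in the article;
# the animal values are guesses consistent with its orderings.
hc_plane = {
    "brick":   (0, 0),
    "bee":     (3, 2),
    "octopus": (4, 6),
    "cat":     (5, 5),
    "human":   (10, 8),
}

def order_along(axis):
    """Entity names sorted along one axis: 'H' or 'C'."""
    i = {"H": 0, "C": 1}[axis]
    return [name for name, pos in
            sorted(hc_plane.items(), key=lambda kv: kv[1][i])]

print(order_along("C"))  # ['brick', 'bee', 'cat', 'octopus', 'human']
print(order_along("H"))  # ['brick', 'bee', 'octopus', 'cat', 'human']
```

The point of writing it down this way is only to make the article's claim vivid: the two orderings disagree (the octopus outranks the cat on C but not on H), so the plane cannot be collapsed to a single scale.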

It is, of course, naïve to assign a simple scalar to a being’s capacity for consciousness

Under the assumptions we’re making, human-likeness and the capacity for consciousness are broadly correlated for animals. However, the octopus appears lower down the H-axis than the cat, despite being further along on the C-axis. I don’t want to defend these relative orderings specifically. But the octopus exemplifies the possibility of a creature that is cognitively sophisticated, that we are inclined to credit with a capacity for rich conscious experiences, but whose behaviour is hard for humans to understand. Taking this idea further, we can imagine conscious beings far more inscrutable than an octopus. Such beings would appear down there with the brick on the H-axis, but for very different reasons. To describe and explain the behaviour of a brick, the elaborate concepts we use to describe and explain human behaviour are unnecessary, since it exhibits none to speak of. But to describe and explain the behaviour of a cognitively sophisticated but inscrutable being, those resources would be insufficient.

There is plenty to take issue with in these designations. It is, of course, naïve to assign a simple scalar to a being’s capacity for consciousness. A more nuanced approach would be sensitive to the fact that different combinations of cognitive capabilities are present in different animals. Moreover, the extent to which each of these capabilities contributes to the richness of a creature’s experience is open to debate. Similar doubts can be cast on the validity of the H-axis. But the H-C plane should be viewed as a canvas on which crude, experimental sketches of the space of possible minds can be made, a spur to discussion rather than a rigorous theoretical framework. Furthermore, diagrams of the H-C plane are not attempts to portray facts of the matter with respect to the consciousness of different beings. Rather, they are speculative attempts to anticipate the consensual attitude we might arrive at about the consciousness of various entities, following a collective process of observation, interaction, debate, discussion and investigation of their inner workings.

"Quem for brasileiro, siga-me." Duque de Caxias

"Vamos mudar isso aí. Tá OK?" Capitão Mito Bolsonaro

Re:Uma visão cética (sem bullshits filosóficos) da consciência
« Resposta #1320 Online: 03 de Fevereiro de 2018, 22:02:30 »

continuation...

Citar
Let’s put some contemporary examples of robotics and artificial intelligence on the H-C plane. These include Roomba (a domestic vacuum-cleaning robot), BigDog (a four-legged robot with life-like locomotion), and AlphaGo (the program created by Google DeepMind that defeated the champion Go player Lee Sedol in 2016). All three are pressed up to the far left of the C-axis. Indeed, no machine, no robot or computer program yet exists that could plausibly be ascribed any capacity for consciousness at all.

On the other hand, as far as human-likeness is concerned, all three are well above the brick. BigDog appears slightly below Roomba, both of which are slightly above AlphaGo. BigDog is guided by a human operator. However, it is capable of automatically adjusting to rough or slippery terrain, and of righting itself when its balance is upset, by being kicked, for example. In describing these aspects of its behaviour, it’s natural to use phrases such as ‘it’s trying not to fall over’ or even ‘it really wants to stay upright’. That is to say, we tend to adopt towards BigDog what Daniel Dennett calls the ‘intentional stance’, imputing beliefs, desires and intentions because this makes it easier to describe and explain its behaviour.

Unlike BigDog, Roomba is a fully autonomous robot that can operate for long periods without human intervention. Despite BigDog’s lifelike response to being kicked, the slightest understanding of its inner workings should dispel any inclination to see it as a living creature struggling against adversity. The same is true of Roomba. However, the behaviour of Roomba is altogether more complex, because it has an overarching mission, namely to keep the floor clean. Against the backdrop of such a mission, the intentional stance can be used in a far more sophisticated way, invoking an interplay of perception, action, belief, desire and intention. Not only are we inclined to say things such as: ‘It’s swerving to avoid the chair leg’, we might also say: ‘It’s returning to the docking station because its batteries are low’, or ‘It’s going over that patch of carpet again because it can tell that it’s really dirty.’

AlphaGo scores the lowest of the three artefacts we’re looking at, though not due to any lack in cognitive capabilities. Indeed, these are rather impressive, albeit in a very narrow domain. Rather, it is because AlphaGo’s behaviour can barely be likened to a human’s or an animal’s at all. Unlike BigDog and Roomba, it doesn’t inhabit the physical world or have a virtual surrogate in any relevant sense. It doesn’t perceive the world or move within it, and the totality of its behaviour is manifest through the moves it makes on the Go board. Nevertheless, the intentional stance is sometimes useful to describe its behaviour. Demis Hassabis, DeepMind’s co-founder, issued three telling tweets concerning the one game that AlphaGo lost to Sedol in the five-game series. In the first tweet, he wrote: ‘#AlphaGo thought it was doing well, but got confused on move 87.’ He went on to say: ‘Mistake was on move 79, but #AlphaGo only came to that realisation on around move 87.’ Shortly afterwards he tweeted: ‘When I say “thought” and “realisation” I just mean the output of #AlphaGo’s value net. It was around 70 per cent at move 79 and then dived on move 87.’

‘It’s not a human move. I’ve never seen a human play this move. So beautiful’

To anyone unfamiliar with AlphaGo’s inner workings, the first two tweets would have made far more sense than the scientifically more accurate statement in the third. However, this is a shallow use of the intentional stance, which is ultimately of little help in understanding AlphaGo. It does not interact with a world of spatiotemporally located objects, and there is no fruitful sense in which its behaviour can be characterised in terms of the interplay of perception, belief, desire, intention and action.

On the other hand, it deploys a formidable set of cognitive skills within the microworld of Go. It learns through experience, garnered through self-play as well as from records of human games. It can search through myriad possible plays to determine its next move. Its ability to respond effectively to subtle board patterns replicates what is often called intuition in top human players. And in one extraordinary move during the Sedol match, it displayed a form of what we might call creativity. It ventured into the fifth line of the Go board using a move known as a shoulder hit, in which a stone is placed diagonally adjacent to an opponent’s stone. Commentating on the match, the European Go champion Fan Hui remarked: ‘It’s not a human move. I’ve never seen a human play this move. So beautiful.’ According to AlphaGo’s own estimate, there was a one-in-10,000 chance that a human would have used the same tactic, and it went against centuries of received wisdom. Yet this move was pivotal in giving it victory.

What we find in AlphaGo is an example of what we might term, not ‘conscious exotica’, but rather a form of ‘cognitive exotica’. Through a process largely opaque to humans, it manages to attain a goal that might have been considered beyond its abilities. AlphaGo’s prowess is confined to Go, and we are a long way from artificial general intelligence. However, it’s natural to wonder about the possible forms that artificial general intelligence might take – and how they could be distributed within the space of possible minds.

So far we have looked at the human-likeness and capacity for consciousness of various real entities, both natural and artificial. But in Figure 2 below, a number of hypothetical beings are placed on the H-C plane. Obviously, this is wildly speculative. Only through an actual encounter with an unfamiliar creature could we truly discover our attitude towards it and how our language would adapt and extend to accommodate it. Nevertheless, guided by reason, the imagination can tell us something about the different sorts of entity that might populate the space of possible minds. 



Figure 2. Exotica on the H-C plane
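The figure's data points can be encoded as coordinates. To be clear about what is assumed: every number below is my own rough reading of the essay's verbal placements (the text gives only a few explicit values, such as 10 on the H-axis for a whole brain emulation and "6 or 7" for the mind children), not figures supplied by the author.

```python
# Illustrative only: some of Figure 2's entities as (H, C) pairs.
# H = human-likeness (human = 10); C = capacity for consciousness
# (human = 10 here, superconscious beings scoring higher).
# All coordinates are guesses from the essay's prose.

hc_plane = {
    "human":                 (10.0, 10.0),
    "whole-brain emulation": (10.0, 10.0),  # behaviourally indistinguishable
    "human-like zombie AI":  (10.0, 0.0),   # top-left corner
    "mind children":         (6.5, 13.0),   # "6 or 7" on H, superconscious
    "exotic conscious AGI":  (1.0, 10.0),
    "exotic zombie AGI":     (1.0, 0.0),
}

def zombies(plane):
    """Entities with complex behaviour but no consciousness (C = 0)."""
    return sorted(name for name, (_, c) in plane.items() if c == 0.0)

print(zombies(hc_plane))
```

Querying the structure picks out the left-hand edge of the plane, the phenomenological zombies discussed below.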

Take some possible forms of human-level artificial general intelligence (AGI), such as an AI built to mimic exactly the neural processing in the human brain. This could be achieved by copying the brain of a specific individual – scanning its structure in nanoscopic detail, replicating its physical behaviour in an artificial substrate, and embodying the result in a humanoid form. This process, known as ‘whole brain emulation’, would, in principle, yield something whose behaviour was indistinguishable from the original. So, being perfectly human-like, this would be an example of an artificial general intelligence with a 10 on the H-axis. Alternatively, rather than copying a specific person, an artificial brain could be constructed that matched a statistical description of a typical newborn’s central nervous system. Duly embodied and reared like a human child, the result would be another perfectly human-like AGI.

Would these beings be conscious? Or rather, would we come to treat them the way we treat fellow conscious creatures, and would we describe them in the same terms? I conjecture that we would. Whatever prejudices we might start out with, their perfectly human-like behaviour would soon shape our feelings towards them to one of fellowship. So a human-like, conscious AGI is surely a possibility, and it would occupy the same spot on the H-C plane as a human.

But as we’ve already noted, there’s no reason to suppose that the only way to build a human-level artificial general intelligence is to copy the biological brain. Perhaps an entirely different architecture could implement the same result. (Ex Machina’s Ava is a fictional example.) It might be possible to reach human-level intelligence using some combination of brute force search techniques and machine learning with big data, perhaps exploiting senses and computational capacity unavailable to humans.

Such possibilities suggest several new kinds of being on the H-C plane. The first of these is the human-like zombie AI in the top left-hand corner. This entity not only has human-level intelligence, but is also thoroughly human-like in its behaviour, which can be described and explained using just the same language we use to describe human behaviour. However, it lacks consciousness. In Nagel’s terms, it isn’t like anything to be this thing. It is, in this sense, a phenomenological zombie.

Now, can we really imagine such a thing? Surely if its behaviour were indistinguishable from human behaviour, we would come to treat it in the way we treat each other. Surely, as we interacted with such beings, our attitude towards them would migrate towards fellowship, coming to see them as fellow conscious creatures and treating them as such. But suppose such an entity functioned merely by mimicking human behaviour. Through a future generation of very powerful machine-learning techniques, it has learned how to act in a convincingly human-like way in a huge variety of situations. If such an AGI says it is feeling sad, this is not because of a conflict between the way things are and the way it would like things to be, but rather because it has learned to say that it is sad in those particular circumstances. Would this alter our attitude? I conjecture that it would, that we would deny it consciousness, confining it to the left of the C-axis. 

We should entertain the likelihood that the richness of their conscious experiences would exceed human capacity

What sort of entity might be produced if someone – or most likely some corporation, organisation or government – set out to create an artificial successor to humankind, a being superior to homo sapiens? Whether idealistic, misguided or just plain crazy, they might reason that a future generation of artificial general intelligences could possess far greater intellectual powers than any human. Moreover, liberated from the constraints of biology, such beings could undertake long journeys into interstellar space that humans, with their fragile, short-lived bodies, would never survive. It would be AIs, then, who would go out to explore the wonders of the Universe up close. Because of the distances and timescales involved, the purpose of these AIs wouldn’t be to relay information back to their creators. Rather, they would visit the stars on humanity’s behalf. Let’s call such hypothetical beings our ‘mind children’, a term borrowed from the Austrian roboticist Hans Moravec.

Now, where would these mind children appear on the H-C plane? Well, with no one waiting for a message home, there would seem to be little point in sending an artefact out to the stars that lacked the ability to consciously experience what it found. So the creators of our mind children would perhaps go for a biologically inspired brain-like architecture, to ensure that they scored at least as well as humans on the C-axis. Indeed, we should entertain the likelihood that the richness of their conscious experiences would exceed human capacity, that they would enjoy a form of superconsciousness. This might be the case, for example, if they had a suite of sensors with a much larger bandwidth than a human’s, or if they were able to grasp complex mathematical truths that are beyond human comprehension, or if they could hold a vast network of associations in their minds at once while we humans are confined to just a few.

As for the H-axis, a brain-inspired blueprint would also confer a degree of human-likeness on the AI. However, its superintelligence would probably render it hard for humans to fully understand. It would perhaps get 6 or 7. In short, our superintelligent, superconscious, artificially intelligent progeny are to be found at the right-hand end of the diagram, somewhat more than halfway up the H-axis.

What about non-brain-like artificial general intelligence? AGIs of this kind suggest several new data points on the H-C plane, all located lower down on the H-axis. These are the truly exotic AGIs, that is, opposite to human-like. The behaviour of an exotic being cannot be understood – or at least not fully understood – using the terms we usually use to make sense of human behaviour. Such a being might exhibit behaviour that is both complex and effective at attaining goals in a wide variety of environments and circumstances. However, it might be difficult or impossible for humans to figure out how it attains its goals, or even to discern exactly what those goals are. Wittgenstein’s enigmatic remark that ‘if a lion could talk we would not understand him’ comes to mind. But a lion is a relatively familiar creature, and we have little difficulty relating to many aspects of its life. A lion inhabits the same physical world we do, and it apprehends the world using a similar suite of senses. A lion eats, mates, sleeps and defecates. We have a lot in common. The hypothesised exotic AGI is altogether more alien.

The most exotic sort of entity would be one that was wholly inscrutable, which is to say it would be beyond the reach of anthropology. Human culture is, of course, enormously varied. Neighbours from the same village often have difficulty relating to each other’s habits, goals and preferences. Yet, through careful observation and interaction, anthropologists are able to make sense of this variety, rendering the practices of ‘exotic’ cultures – that is, very different from their own – comprehensible to them. But of course, we have even more in common with a fellow human being from a different culture than we do with a lion. Our shared humanity makes the anthropologist’s task tractable. The sort of inscrutable entity we are trying to imagine is altogether more exotic. Even if we were able to engineer an encounter with it and to discern seemingly purposeful behaviour, the most expert team of anthropologists would struggle to divine its purposes or how they are fulfilled.

How might such an entity come about? After all, if it were engineered by humans, why would it not be comprehensible to humans? Well, there are a number of ways that an AI might be created that wouldn’t be understood by its creators. We have already seen that AlphaGo is capable of taking both its programmers and its opponents by surprise. A more powerful general intelligence might find far more surprising ways to achieve its goals. More radically, an AI that was the product of artificial evolution or of self-modification might end up with goals very different from those intended by its programmers. Furthermore, since we are granting the possibility of multifarious extraterrestrial intelligences, the space of possible minds must include not only those beings, but also whatever forms of artificial intelligence they might build. Whatever grip we are capable of getting on the mind of a creature from another world, a world that could be very different from our own, our grip is likely to be more tenuous still for an evolved or self-modified AI whose seed is a system devised to serve that creature’s already alien goals.

An exotic AI is clearly going to get a low score on the H-axis. But what about the C-axis? What might its capacity for consciousness be? Or, to put the matter differently, could we engineer an encounter with such a thing whereby, after sufficient observation and interaction, we would settle on our attitude towards it? If so, what would that attitude be? Would it be the sort of attitude we adopt towards a fellow conscious creature?

Well, now we have arrived at something of a philosophical impasse. Because the proffered definition of inscrutability puts the most exotic sort of AI beyond the reach of anthropology. And this seems to rule out the kind of encounter we require before we can settle on the right attitude towards it, at least according to a Wittgenstein-inspired, non-dualistic stance on subjectivity.

Is it possible to reconcile this view of consciousness with the existence of conscious exotica? Recall the white box thought experiment. Embedded in the mysterious box delivered to our laboratory, with its incomprehensibly complex but fully accessible internal dynamics, might be just the sort of inscrutable AI we are talking about. We might manage to engineer an encounter with the system, or some part of it, revealing seemingly purposeful behaviour, yet be unable to fathom just what that purpose was. An encounter with extraterrestrial intelligence would most likely present a similar predicament.

The novel Solaris (1961) by Stanislaw Lem offers a convincing fictional example. The novel’s protagonists are a crew of scientists orbiting a planet covered by an ocean that turns out to be a single, vast, intelligent organism. As they attempt to study this alien being, it seems to be probing them in turn. It does this by creating human-like avatars out of their memories and unconscious minds; these avatars visit the scientists aboard their spacecraft, with disturbing psychological effects. For their part, the scientists never get to grips with the alien mind of this organism: ‘Its undulating surface was capable of giving rise to the most diverse formations that bore no resemblance to anything terrestrial, on top of which the purpose – adaptive, cognitive, or whatever – of those often violent eruptions of plasmic “creativity” remained a total mystery.’

Suppose you were confronted by an exotic dynamical system such as the white box AI or the oceanic organism in Solaris. You want to know whether it is conscious or not. It’s natural to think that for any given being, whether living or artificial, there is an answer to this question, a fact of the matter, even if the answer is necessarily hidden from us, as it appears to be in these hypothetical cases. On the other hand, if we follow Wittgenstein’s approach to the issue, we go wrong when we think this way. Some facet of reality might be empirically inaccessible to us, but nothing is hidden as a matter of metaphysics.

Because these two standpoints are irreconcilable, our options at first appear to be just twofold. Either:

a) retain the concept of conscious exotica, but abandon Wittgenstein and acknowledge that there is a metaphysically separate realm of subjectivity. This would be a return to the dualism of mind and body and the hard problem/easy problem dichotomy;

or

b) retain a Wittgenstein-inspired approach to consciousness, insisting that ‘nothing is hidden’, but reject the very idea of conscious exotica. As a corollary, we would have to relinquish the project of mapping the space of possible minds onto the H-C plane.

However, there is a third option:

c) retain both the concept of conscious exotica and a Wittgenstein-inspired philosophical outlook by allowing that our language and practices could change in unforeseeable ways to accommodate encounters with exotic forms of intelligence.

We have been going along with the pretence that consciousness is a single, monolithic concept amenable to a scalar metric of capacity. This sort of manoeuvre is convenient in many branches of enquiry. For conservation purposes, an ecologist can usefully compress biodiversity into a single statistic, abstracting away from differences between species, seasonal changes, spatial distribution and so on. In economics, the ‘human development index’ usefully summarises aspects of a country’s education system, healthcare, productivity and the like, ignoring the numerous details of individual lives. However, for some purposes, a more nuanced approach is needed. Examined more closely, the concept of consciousness encompasses many things, including awareness of the world (or primary consciousness), self-awareness (or higher-order consciousness), the capacity for emotion and empathy, and cognitive integration (wherein the brain’s full resources are brought to bear on the ongoing situation).
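The "summary statistic" manoeuvre the paragraph describes can be sketched in a few lines. Everything here is illustrative: the facet names follow the text, but the aggregation rule and the example profiles are my own invention.

```python
# Minimal sketch: compressing several facets of consciousness into one
# scalar, the way the HDI compresses education, health and income.
# Facet names come from the text; all numbers are invented.

FACETS = ("world_awareness", "self_awareness",
          "emotion_and_empathy", "cognitive_integration")

def capacity_for_consciousness(profile):
    """Crude monolithic score: the mean of the facet scores. Exactly
    this aggregation hides the disaggregation that, per the essay,
    exotic entities would force on our concepts."""
    return sum(profile[f] for f in FACETS) / len(FACETS)

adult_human = dict.fromkeys(FACETS, 1.0)  # facets come bundled together
nonhuman_animal = {"world_awareness": 1.0, "self_awareness": 0.0,
                   "emotion_and_empathy": 0.5, "cognitive_integration": 0.5}

print(capacity_for_consciousness(adult_human))    # 1.0
print(capacity_for_consciousness(nonhuman_animal))  # 0.5
```

Two quite different profiles can land near the same scalar, which is the essay's point: for exotic entities the single number stops being informative and the facets have to be reported separately.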

Parts of our language to describe highly exotic entities with complex behaviour might be supplanted by wholly new ways of talking

In a normal, adult human being, these things come bundled together. But in a more exotic entity they might be disaggregated. In most non-human animals we find awareness of the world without self-awareness, and the capacity for suffering without the capacity for empathy. A human-level AI might display awareness of the world and self-awareness without the capacity for emotion or empathy. If such entities became familiar, our language would change to accommodate them. Monolithic concepts such as consciousness might break apart, leading to new ways of talking about the behaviour of AIs.

More radically, we might discover whole new categories of behaviour or cognition that are loosely associated with our old conception of consciousness. In short, while we might retain bits of today’s language to describe highly exotic entities with complex behaviour, other relevant parts of our language might be reshaped, augmented or supplanted by wholly new ways of talking, a process that would be informed by computer science, by comparative cognition, by behavioural psychology and by the natural evolution of ordinary language. Under these conditions, something like ‘capacity for consciousness’ might be usefully retained as a summary statistic for those entities whose behaviour eludes explanation in today’s terms but could be accommodated by a novel conceptual framework wherein the notion of consciousness now familiar to us, though fragmented and refracted, remains discernible.

What are the implications of this possibility for the H-C plane? Figure 2 above indicates a point on the H-C plane with the same value as a human on the C-axis, but which is exotic enough to lie on the H-axis at the limit of applicability of any form of consciousness. Here we find entities, both extraterrestrial and artificial, that possess human-level intelligence but whose behaviour bears little resemblance to human behaviour.

Nevertheless, given sufficiently rich interaction with and/or observation of these entities, we would come to see them as fellow conscious beings, albeit having modified our language to accommodate their eccentricities. One such entity, the exotic conscious AGI, has a counterpart at the left-hand end of the H-C plane, namely the exotic zombie AGI. This is a human-level AI whose behaviour is similarly non-human-like, but that we are unable to see as conscious however much we interact with it or observe it. These two data points – the exotic, conscious, human-level intelligence and the exotic, zombie, human-level intelligence – define the bottom two corners of a square whose other corners are humans themselves at the top right, and human-like zombies at the top left. This square illustrates the inextricable, three-way relationship between human-level intelligence, human-likeness, and consciousness.

We can now identify a number of overlapping regions within our embryonic space of possible minds. These are depicted in Figure 3 below. On the (debatable) assumption that, if an entity is conscious at all, its capacity for consciousness will correlate with its cognitive prowess, human-level intelligence features in the two, parallel purple regions, one to the far left of the diagram and one at human level on the C-axis. The exotic, conscious AGI resides at the bottom of the latter region, and also at the left-hand end of the orange region of conscious exotica. This region stretches to the right of the C-axis, beyond the human level, because it encompasses exotic beings, which could be extraterrestrial or artificial or both, with superhuman intelligence and a superhuman capacity for consciousness. Our ‘mind children’ are less exotic forms of possible superintelligent, superconscious creatures. But the conscious exotica here are, perhaps, the most interesting beings in the space of possible minds, since they reside at the limit of what we would welcome into the fellowship of consciousness, yet sit on a boundary beyond which everything complex is inscrutably strange.



Figure 3. Notable regions of the H-C plane

This boundary marks the edge of a region that is empty, denoted the ‘Void of Inscrutability’. It is empty because, as Wittgenstein remarks, we say only of a human being and what behaves like one that it is conscious. We have stretched the notion of what behaves like a human being to breaking point (perhaps further than Wittgenstein would find comfortable). As we approach that breaking point, I have suggested, today’s language of consciousness begins to come apart. Beyond that point we find only entities for which our current language has no application. Insofar as they exhibit complex behaviour, we are obliged to use other terms to describe and explain it, so these entities are no further along the C-axis than the brick. So the lowest strip of the diagram has no data points at all. It does not contain entities that are inscrutable but who might – for all we know – be conscious. To think this would be to suppose that there are facts of the matter about the subjectivity of inscrutably exotic entities that are forever closed off to us. We can avoid the dualism this view entails by accepting that this region is simply a void.

The void of inscrutability completes my provisional sketch of the space of possible minds. But what have we gained from this rather fanciful exercise? The likelihood of humans directly encountering extraterrestrial intelligence is small. The chances of discovering a space-borne signal from another intelligent species, though perhaps greater, are still slight. But artificial intelligence is another matter. We might well create autonomous, human-level artificial intelligence in the next few decades. If this happens, the question of whether, and in what sense, our creations are conscious will become morally significant. But even if none of these science-fiction scenarios comes about, to situate human consciousness within a larger space of possibilities strikes me as one of the most profound philosophical projects we can undertake. It is also a neglected one. With no giants upon whose shoulders to stand, the best we can do is cast a few flares into the darkness.
https://aeon.co/essays/beyond-humans-what-other-kinds-of-minds-might-be-out-there
"Whoever is Brazilian, follow me." Duque de Caxias

"We're going to change all that. OK?" Capitão Mito Bolsonaro

Offline Buckaroo Banzai

  • Max Level
  • *
  • Posts: 36,028
  • Sex: Male
Re: A skeptical view (no philosophical bullshit) of consciousness
« Reply #1321 on: February 5, 2018, 15:53:19 »
On YouTube, searching for the authors' names turns up plenty of related results.

And this publication, Aeon.co, has its own channel:

https://www.youtube.com/user/aeonmagazine

It looks like a nerdier sort of Vox, without all of George Soros's globalist world-domination schemes behind it.

<a href="https://www.youtube.com/v/1ur7eIKiwuA" target="_blank" class="new_win">https://www.youtube.com/v/1ur7eIKiwuA</a>

Offline Gigaview

  • Max Level
  • *
  • Posts: 13,790
  • "My sword has no parties."
Re: A skeptical view (no philosophical bullshit) of consciousness
« Reply #1322 on: April 24, 2018, 20:32:55 »
A great argument against the idea of a "soul"/"spirit" endowed with personality.

Quote
Can a brain injury change who you are?
Leanne Rowlands -PhD researcher in Neuropsychology, Bangor University
April 20, 2018 11.03am BST

Who we are, and what makes us “us” has been the topic of much debate throughout history. At the individual level, the ingredients for the unique essence of a person consist mostly of personality concepts. Things like kindness, warmth, hostility and selfishness. Deeper than this, however, is how we react to the world around us, respond socially, our moral reasoning, and ability to manage emotions and behaviours.

Philosophers, including Plato and Descartes, attributed these experiences to non-physical entities, quite separate to the brain. “Souls”, they describe, are where human experiences take place. According to this belief, souls house our personalities, and enable moral reasoning to occur. This idea still enjoys substantial support today. Many are comforted by the thought that the soul does not need the brain, and mental life can continue after death.

If who we are is attributed to a non-physical substance independent of the brain, then physical damage to this organ should not change a person. But there is an overwhelming amount of neuropsychological evidence to suggest that this is, in fact, not only possible, but relatively common.

The perfect place to start explaining this is the curious case of Phineas Gage.

Quote

Phineas Gage, after injury. Originally from the collection of Jack and Beverly Wilgus, and now in the Warren Anatomical Museum, Harvard Medical School.

In 1848, 25-year-old Gage was working as a construction foreman for a railroad company. During the works, explosives were required to blast away rock. This intricate procedure involved explosive powder and a tamping iron rod. In a moment of distraction, Gage detonated the powder and the charge went off, sending the rod through his left cheek. It pierced his skull, and travelled through the front of his brain, exiting the top of his head at high speed. Modern day methods have since revealed that the likely site of damage was to parts of his prefrontal cortex.

Gage was thrown to the floor, stunned, but conscious. His body eventually recovered well, but Gage’s behavioural changes were extraordinary. Previously a well-mannered, respectable, smart business man, Gage reportedly became irresponsible, rude and aggressive. He was careless and unable to make good decisions. Women were advised not to stay long in his company, and his friends barely recognised him.

A similar case was that of photographer and forerunner of motion pictures Eadweard Muybridge. In 1860, Muybridge was involved in a stagecoach accident and sustained a brain injury to the orbitofrontal cortex (part of the prefrontal cortex). He had no recollection of the crash, and developed traits that were quite unlike his former self. He became aggressive, emotionally unstable, impulsive and possessive. In 1874, upon discovering his wife’s infidelity, he shot and killed the man involved. His attorney pled insanity, due to the extent of the personality changes following the accident. Sworn testimonies emphasised that “he seemed like a different man”.

Perhaps an even more controversial example is that of a 40-year-old school teacher who, in the year 2000, developed a strong interest in pornography, particularly child pornography. The patient went to great lengths to conceal this interest, which he acknowledged was unacceptable. But unable to refrain from his urges, he continued to act on his sexual impulses. When he began making sexual advances towards his young stepdaughter, he was legally removed from the home and diagnosed with paedophilia. Later, it was discovered that he had a brain tumour displacing part of his orbitofrontal cortex, disrupting its function. The symptoms resolved with the removal of the tumour.

Different personalities

Quote

Orbitofrontal cortex location. Wikimedia/Paul Wicks

All these cases have one thing in common: damage to areas of the prefrontal cortex, in particular the orbitofrontal cortex. Although they may be extreme examples, the idea that damage to these parts of the brain results in severe personality changes is now well-established. The prefrontal cortex has a role in managing behaviours, regulating emotions and responding appropriately. So it makes sense that disinhibited and inappropriate behaviour, psychopathy, criminal behaviour, and impulsivity have all been linked to damage of this area.

However, changes after injury can be more subtle than those previously described. Consider the case of Mr. L, who suffered a severe traumatic brain injury after falling off a roof while supervising a building construction. His later aggressive behaviour and delusional jealousy about his wife’s apparent infidelity caused a breakdown in their relationship. To her, he was not the same man anymore.

Difficulties with emotion management like this are not only distressing, but are predictive of lower psychological adjustment, negative social changes and greater caregiver distress. Many brain injury survivors also suffer with depression, anxiety and social isolation, while struggling to adjust to post-injury life.

But with a growing appreciation of the relevance of emotional adjustment in rehabilitation, treatments have been developed to help manage these changes. In our lab, we have developed the BISEP (Brain Injury Solutions and Emotions Programme), which is a cost-effective, education-based, group therapy. This addresses several common complaints of brain injury survivors and has a strong emphasis on emotion regulation. It teaches attendees strategies that can be used adaptively and independently, to help manage their emotions and associated behaviours. Although it is early days, we have obtained some positive preliminary results.

From a neuropsychological perspective, it’s clear that who we are is dependent on the brain, and not the soul. Damage to the prefrontal cortex can change who we are, and though people have become unrecognisable from it in the past, new strategies will make a big difference to their lives. It may be too late for Gage, Muybridge and others, but brain injury survivors of the future will have the help they need to go back to living their lives as they did before.
https://theconversation.com/amp/can-a-brain-injury-change-who-you-are-95081

Offline Sdelareza

  • Level 14
  • *
  • Posts: 347
Re: A skeptical view (no philosophical bullshit) of consciousness
« Reply #1323 on: April 24, 2018, 21:21:44 »
In Phineas Gage's case, there was indeed a short-term change in his personality after the accident with the bar that went through his skull. He became "irresponsible, rude and aggressive" and quit his job as a labourer.

But what the text above doesn't mention is that he later apparently went back to being a normal person, since he again held jobs that demanded responsibility (it seems he tended horses and worked as a stagecoach driver). That wouldn't have been possible with the symptoms he showed right after the accident.

This is a result of the brain's plasticity, which manages to re-establish damaged neural connections using the remaining nerve cells (of course, damaged nerve cells can't be replaced).

Did that allow the former soul to dictate his behaviour again? I didn't even know there are spiritists who hold that souls or spirits are endowed with personality.
Well, to be honest, I've never studied Spiritism.
« Last edit: April 24, 2018, 21:46:21 by Sdelareza »

Offline Gorducho

  • Level 26
  • *
  • Posts: 1,233
  • Sex: Male
Re: A skeptical view (no philosophical bullshit) of consciousness
« Reply #1324 on: April 25, 2018, 08:40:32 »
I didn't even know there are spiritists who hold that souls or spirits are endowed with personality.
Well, to be honest, I've never studied Spiritism.
ALL spiritists hold that the personality resides in the "spirit".
  :D
At least the main "spiritisms" do: santerías, Anglo, Kardecism, chiquismo, Filipino...
Of course, it doesn't work, as shown above.
But it is the essence of spiritist belief.
« Last edit: April 25, 2018, 08:43:54 by Gorducho »

 
