Archive for September, 2009

Image based CAPTCHAs

September 17, 2009

Machines can’t replicate human image recognition — yet

Wednesday, September 9, 2009

University Park, Pa. — While computers can replicate many aspects of human behavior, they do not possess our ability to recognize distorted images, according to a team of Penn State researchers.

“Our goal is to seek a better understanding of the fundamental differences between humans and machines and utilize this in developing automated methods for distinguishing humans and robotic programs,” said James Z. Wang, associate professor in Penn State’s College of Information Sciences and Technology.

Wang, along with Ritendra Datta, a Penn State doctoral recipient, and Jia Li, associate professor of statistics at Penn State, explored the difference in human and machine recognition of visual concepts under various image distortions.

The researchers used those differences to design image-based CAPTCHAs (Completely Automated Public Turing Test to Tell Computers and Humans Apart), visual devices used to prevent automated network attacks.

Many e-commerce Web sites use CAPTCHAs, randomly generated sets of words that a user types into a box to complete a registration or purchasing process. This verifies that the user is human and not a robotic program.

In Wang’s study, a demonstration program with an image-based CAPTCHA called IMAGINATION was made publicly available online. Both humans and robotic programs were observed using the CAPTCHA.

Although the pool of human users was limited, the results, presented in the September issue of IEEE Transactions on Information Forensics and Security, showed that robotic programs were not able to recognize distorted images. In other words, a computer recognition program had to rely on an accurate picture, while humans could tell what the picture showed even when it was distorted.
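The article does not spell out the distortions IMAGINATION actually uses, so the following is only a toy sketch of the general principle: jitter each pixel by a small random offset so the content survives for a human viewer while a template-matching recognizer, which expects an accurate picture, fails. The `distort` function and its parameters are illustrative inventions, not taken from the paper.

```python
import random

def distort(image, max_shift=2, seed=0):
    """Return a copy of `image` (a 2D list of pixel values) in which
    each pixel is sampled from a nearby, randomly chosen source pixel,
    clamped to the image borders. The content stays recognizable to a
    human, but no longer lines up with an undistorted template."""
    rng = random.Random(seed)  # fixed seed so the demo is repeatable
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy = min(max(y + rng.randint(-max_shift, max_shift), 0), h - 1)
            sx = min(max(x + rng.randint(-max_shift, max_shift), 0), w - 1)
            out[y][x] = image[sy][sx]
    return out
```

A naive recognizer that compares the distorted output pixel-for-pixel against the original will find almost every pixel moved, which is exactly the asymmetry the researchers exploit.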

Wang said he hopes to work with developers in the future to make IMAGINATION a CAPTCHA program that Web sites can use to strengthen the prevention of automated network attacks.

Even though machine recognizability does not exceed human recognizability at this time, Wang said that there is a possibility that it will in the future.

“We are seeing more intelligently designed computer programs that can harness a large volume of online data, much more than a typical human can experience in a lifetime, for knowledge generation and automatic recognition,” said Wang. “If certain obstacles, which many believe to be insurmountable, such as scalability and image representation, can be overcome, it is possible that one day machine recognizability can reach that of humans.”

Categories: News Stories

Digital Contacts -> Augmented Reality

September 12, 2009

Original Article from, posted on

Digital contacts will keep an eye on your vital signs

  • Story Highlights
  • Scientists developing contact lens with built-in LED, powered by radio waves
  • More advanced lens could provide scrolling captions beneath what you see
  • Surface of the eye contains enough data to perform personal health monitoring
  • Lens must undergo more testing before gaining approval from FDA

updated 9:53 a.m. EDT, Fri September 11, 2009
By Brian X. Chen

Forget about 20/20. “Perfect” vision could be redefined by gadgets that give you the eyes of a cyborg.

The surface of the eye can be used to measure much of the same data you would get from blood tests.

The tech industry calls the digital enrichment of the physical world “augmented reality.” Such technology is already appearing in smartphones and toys, and enthusiasts dream of a pair of glasses we could don to enhance our everyday perception. But why stop there?

Scientists, eye surgeons, professors and students at the University of Washington have been developing a contact lens containing one built-in LED, powered wirelessly with radio frequency waves.

Eventually, more advanced versions of the lens could be used to provide a wealth of information, such as virtual captions scrolling beneath every person or object you see. Significantly, it could also be used to monitor your own vital signs, such as body temperature and blood glucose level.

Why a contact lens? The surface of the eye contains enough data about the body to perform personal health monitoring, according to Babak Parvis, a University of Washington professor of bionanotechnology, who is working on the project.

“The eye is our little door into the body,” Parvis said.

With gadgets becoming increasingly mobile and powerful, the technology industry is seeing a steady stream of applications devoted to health. A few examples include a cellphone microscope used to diagnose malaria, surgeons honing their skills with the Nintendo Wiimote, and an iPhone app made for diabetes patients to track their glucose levels.

A contact lens with augmented-reality powers would take personal health monitoring several steps further, Parvis said, because the surface of the eye can be used to measure much of the data you would read from your blood tests, including cholesterol, sodium, potassium and glucose levels.

And that’s just the beginning. Because this sort of real-time health monitoring has been impossible in the past, there’s likely more about the human eye we haven’t yet discovered, Parvis said.

And beyond personal health monitoring, this finger-tip sized gadget could one day create a new interface for gaming, social networking and, well, interacting with reality in general.

Parvis and his colleagues have been working on their multipurpose lens since 2004. They integrated miniature antennas, control circuits, an LED and radio chips into the lens using optoelectronic components they built from scratch.

They hope these components will eventually include hundreds of LEDs to display images in front of the eye. Think words, charts and even photographs.

Sounds neat, doesn’t it? But the group faces a number of challenges before achieving true augmented eye vision.

First and foremost, safety is a prime concern with a device that comes in contact with the eye. To ensure the lens is safe to wear, the group has been testing prototypes on live rabbits, which have worn the lenses for 20 minutes at a time with no adverse effects.

However, the lens must undergo much more testing before gaining approval from the Food and Drug Administration.

A fundamental challenge this contact lens will face is the task of tracking the human eye, said Blair MacIntyre, an associate professor and director of the augmented environments lab at Georgia Tech College of Computing. MacIntyre is not involved in the contact lens product, but he helped develop an augmented-reality zombie shooter game.

“These developments are obviously very far from being usable, but very exciting,” MacIntyre said. “Using them for AR will be very hard. You need to know exactly where the user is looking if you want to render graphics that line up with the world, especially when their eyes saccade (jump around), which our eyes do at a very high rate.”

Given that obstacle, we’re more likely to see wearable augmented-reality eyewear in the form of glasses before a contact lens, MacIntyre said. With glasses, we’ll only need to track where the glasses are and where the eyes are relative to them, as opposed to where the eyes are actually looking.

And with a contact lens, it will be difficult to cram heavy computational power into such a small device, even with today’s state-of-the-art technologies, Parvis admits.

There are many advanced sensors that would amplify the lens’ abilities, but the difficulty lies in integrating them, which is why Parvis and his colleagues have had to engineer their own components. And when the contact lens evolves from personal health monitoring into more processor-intense augmented-reality applications, it’s more likely it will have to draw its powers from a companion device such as a smartphone, he said.

Layar, an Amsterdam-based startup focusing on augmented reality, shares the University of Washington’s vision of an augmented-reality contact lens. However, Raimo van der Klein, CEO of Layar, said such a device’s vision would be limited if it did not work with an open platform supporting every type of data available via the web, such as mapping information, restaurant reviews or even Twitter feeds.

Hence, his company has taken a first step by releasing an augmented-reality browser for Google Android smartphones, for which software developers can provide “layers” of data for various web services.

Van der Klein believes a consumer-oriented, multipurpose lens is just one example of where augmented-reality technology will take form in the near future. He said to expect these applications to move beyond augmenting vision and expand to other parts of the body.

“Imagine audio cues through an earpiece or sneakers vibrating wherever your friends are,” van der Klein said. “We need to keep an open eye for future possibilities, and I think a contact lens is just part of it.”

Categories: News Stories

Blindsight article

September 12, 2009

From NYTimes, Dec 2008

Blind, Yet Seeing: The Brain’s Subconscious Visual Sense

William Duke

BLINDSIGHT A patient whose visual lobes in the brain were destroyed was able to navigate an obstacle course and recognize fearful faces subconsciously.

Published: December 22, 2008

The man, a doctor left blind by two successive strokes, refused to take part in the experiment. He could not see anything, he said, and had no interest in navigating an obstacle course — a cluttered hallway — for the benefit of science. Why bother?

When he finally tried it, though, something remarkable happened. He zigzagged down the hall, sidestepping a garbage can, a tripod, a stack of paper and several boxes as if he could see everything clearly. A researcher shadowed him in case he stumbled.

“You just had to see it to believe it,” said Beatrice de Gelder, a neuroscientist at Harvard and Tilburg University in the Netherlands, who with an international team of brain researchers reported on the patient on Monday in the journal Current Biology. A video of the experiment is available online.

The study, which included extensive brain imaging, is the most dramatic demonstration to date of so-called blindsight, the native ability to sense things using the brain’s primitive, subcortical — and entirely subconscious — visual system.

Scientists have previously reported cases of blindsight in people with partial damage to their visual lobes. The new report is the first to show it in a person whose visual lobes — one in each hemisphere, under the skull at the back of the head — were completely destroyed. The finding suggests that people with similar injuries may be able to recover some crude visual sense with practice.

“It’s a very rigorously done report and the first demonstration of this in someone with apparent total absence of a striate cortex, the visual processing region,” said Dr. Richard Held, an emeritus professor of cognitive and brain science at the Massachusetts Institute of Technology, who with Ernst Pöppel and Douglas Frost wrote the first published account of blindsight in a person, in 1973.

The man in the new study, an African living in Switzerland at the time, suffered the two strokes in his 50s, weeks apart, and was profoundly blind by any of the usual measures. Unlike people suffering from eye injuries, or congenital blindness in which the visual system develops abnormally, his brain was otherwise healthy, as were his eyes, so he had the necessary tools to process subconscious vision. What he lacked were the circuits that cobble together a clear, conscious picture.

The research team took brain scans and magnetic resonance images to see the damage, finding no evidence of visual activity in the cortex. They also found no evidence that the patient was navigating by echolocation, the way that bats do. Both the patient, T. N., and the researcher shadowing him walked the course in silence.

The man himself was as dumbfounded as anyone that he was able to navigate the obstacle course.

“The more educated people are,” Dr. de Gelder said, “in my experience, the less likely they are to believe they have these resources that they are not aware of to avoid obstacles. And this was a very educated person.”

Scientists have long known that the brain digests what comes through the eyes using two sets of circuits. Cells in the retina project not only to the visual cortex — the destroyed regions in this man — but also to subcortical areas, which in T. N. were intact. These include the superior colliculus, which is crucial in eye movements and may have other sensory functions; and, probably, circuits running through the amygdala, which registers emotion.

In an earlier experiment, one of the authors of the new paper, Dr. Alan Pegna of Geneva University Hospitals, found that the same African doctor had emotional blindsight. When presented with images of fearful faces, he cringed subconsciously in the same way that almost everyone does, even though he could not consciously see the faces. The subcortical, primitive visual system apparently registers not only solid objects but also strong social signals.

Dr. Held, the M.I.T. neuroscientist, said that in lower mammals these midbrain systems appeared to play a much larger role in perception. In a study of rats published in the journal Science last Friday, researchers demonstrated that cells deep in the brain were in fact specialized to register certain qualities of the environment.

They include place cells, which fire when an animal passes a certain landmark, and head-direction cells, which track which way the face is pointing. But the new study also found strong evidence of what the scientists, from the Norwegian University of Science and Technology in Trondheim, called “border cells,” which fire when an animal is close to a wall or boundary of some kind.

All of these types of neurons, which exist in some form in humans, may also have assisted T. N. in his navigation of the obstacle course.

In time, and with practice, people with brain injuries may learn to lean more heavily on such subconscious or semiconscious systems, and perhaps even begin to construct some conscious vision from them.

“It’s not clear how sharp it would be,” Dr. Held said. “Probably a vague, low-resolution spatial sense. But it might allow them to move around more independently.”

Categories: News Stories

Trust and wiki

September 4, 2009

From Technology Review, 9/4/2009

Adding Trust to Wikipedia, and Beyond
Tracing information back to its source could help prove trustworthiness.

By Erica Naone
The official motto of the Internet could be “don’t believe everything you read,” but moves are afoot to help users know better what to be skeptical about and what to trust.
A tool called WikiTrust, which helps users evaluate information on Wikipedia by automatically assigning a reliability color-coding to text, came into the spotlight this week with news that it could be added as an option for general users of Wikipedia. Also, last week the Wikimedia Foundation announced that changes made to pages about living people will soon need to be vetted by an established editor. These moves reflect a broader drive to make online information more accountable. And this week the World Wide Web Consortium published a framework that could help any Web site make verifiable claims about authorship and reliability of content.
WikiTrust, developed by researchers at the University of California, Santa Cruz, color-codes the information on a Wikipedia page using algorithms that evaluate the reliability of the author and the information itself. The algorithms do this by examining how well-received the author’s contributions have been within the community. They look at how quickly a user’s edits are revised or reverted and consider the reputation of those who interact with the author. If a disreputable editor changes something, the original author won’t necessarily lose many reputation points. A white background, for example, means that a piece of text has been viewed by many editors who did not change it and that it was written by a reliable author. Shades of orange signify doubt, dubious authorship, or ongoing controversy.
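The real WikiTrust algorithms are considerably more involved, but the feedback loop described above can be caricatured in a few lines. The function names, the 0.1 step size, and the 0.5 white/orange threshold below are all invented for illustration:

```python
def update_reputation(rep, author, judge, kept):
    """Toy reputation update in the spirit of WikiTrust. An edit that
    survives review raises its author's reputation; one that is
    reverted lowers it. The change is scaled by the reviewer's own
    reputation, so a disreputable editor reverting text costs the
    original author little."""
    delta = 0.1 * rep.get(judge, 0.1)
    rep[author] = rep.get(author, 0.1) + (delta if kept else -delta)
    return rep

def text_color(rep, author, threshold=0.5):
    """Map author reputation to a display color: white for text by
    trusted authors, orange for dubious authorship."""
    return "white" if rep.get(author, 0.0) >= threshold else "orange"
```

Run through a few edits, a new author's text starts orange and only turns white after reputable editors repeatedly leave it alone, which is the accountability property de Alfaro describes.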
Luca de Alfaro, an associate professor of computer science at UC Santa Cruz who helped develop WikiTrust, says that most Web users crave more accountability. “Fundamentally, we want to know who did what,” he says. According to de Alfaro, WikiTrust makes it harder to change information on a page without anyone noticing, and it makes it easy to see what’s happening on a page and analyze it.
The researchers behind WikiTrust are working on a version that includes a full analysis of all the edits made to the English-language version of Wikipedia since its inception. A demo of the full version will be released within the next couple of months, de Alfaro says, though it’s still uncertain whether that will be hosted on the university’s own servers or by the Wikimedia Foundation. The principles used by WikiTrust’s algorithms could be brought onto any site with collaboratively created content, de Alfaro adds.
Creating a common language for building trust online is the goal of the Protocol for Web Description Resources (POWDER), released this week by the World Wide Web Consortium.
Powder takes a simpler approach than WikiTrust. By using Powder’s specifications, a Web site can make claims about where information came from and how it can be used. For example, a site could say that a page contains medical information provided by specific experts. It could also assure users that certain sites will work on mobile devices, or that content is offered through a Creative Commons license.
Powder is designed to integrate with third-party authentication services and to be machine-readable. Users could install a plug-in that would look for claims made through Powder on any given page, automatically check their authentication, and inform other users of the result. Search engines could also read descriptions made using Powder, allowing them to help users locate the most trustworthy and relevant information.
“From the outset, a fundamental aspect of Powder is that, if the document is to be valid, it must point to the author of that document,” says Phil Archer, a project manager for i-sieve technologies who is involved with the Powder working group. “We strongly encourage authors to make available some sort of authentication mechanism.”
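POWDER itself is an XML vocabulary, and the snippet below does not reproduce its actual schema; it is only a conceptual sketch, with invented names, of the idea the article describes: attributed claims scoped to a set of URLs that a client plug-in could look up and, in a real deployment, authenticate.

```python
from urllib.parse import urlparse

# Hypothetical description resource: who is making the claims, which
# pages they cover, and what is being claimed. Real POWDER documents
# carry this information in XML.
description_resource = {
    "attribution": "example-health-authority.org",
    "scope_hosts": {"medical.example.org"},
    "claims": {
        "content": "medical information provided by specific experts",
        "mobile_ok": True,
    },
}

def claims_for(url, resource):
    """Return the resource's claims if `url` falls inside its scope,
    else None. A browser plug-in would additionally verify the
    `attribution` field against a third-party authentication service
    before presenting the claims to the user."""
    host = urlparse(url).hostname
    return resource["claims"] if host in resource["scope_hosts"] else None
```

A search engine could apply the same lookup at index time, which is how Powder-described pages could be surfaced as more trustworthy results.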
Ed Chi, a senior research scientist at the Palo Alto Research Center, believes that educating users about online trust evaluation tools could be a major hurdle. “So far, human-computer interaction research seems to suggest that people are willing to do very little [to determine the trustworthiness of websites]–in fact, nothing,” he says. As an example, Chi notes the small progress that has been made in teaching users to avoid phishing scams or to make sure that they enter credit-card information only on sites that encrypt data. “The general state of affairs is pretty depressing,” he says.
Even if Web users do learn to use new tools to evaluate the trustworthiness of information, most experts agree that this is unlikely to solve the problem completely. “Trust is a very human thing,” Archer says. “[Technology] can never, I don’t think, give you an absolute guarantee that what is on your screen can be trusted at face value.”
Copyright Technology Review 2009.

Color me trustworthy: WikiTrust codes Wikipedia pages according to contributors’ reputations and how the content has changed over time.
Credit: University of California, Santa Cruz

Categories: News Stories

Singularity article

September 4, 2009

From Technology Review, 9/4/2009

The Singularity and the Fixed Point
The importance of engineering motivation into intelligence.
By Edward Boyden
Some futurists such as Ray Kurzweil have hypothesized that we will someday soon pass through a singularity–that is, a time period of rapid technological change beyond which we cannot envision the future of society. Most visions of this singularity focus on the creation of machines intelligent enough to devise machines even more intelligent than themselves, and so forth recursively, thus launching a positive feedback loop of intelligence amplification. It’s an intriguing thought. (One of the first things I wanted to do when I got to MIT as an undergraduate was to build a robot scientist that could make discoveries faster and better than anyone else.) Even the CTO of Intel, Justin Rattner, has publicly speculated recently that we’re well on our way to this singularity, and conferences like the Singularity Summit (at which I’ll be speaking in October) are exploring how such transformations might take place.
As a brain engineer, however, I think that focusing solely on intelligence augmentation as the driver of the future is leaving out a critical part of the analysis–namely, the changes in motivation that might arise as intelligence amplifies. Call it the need for “machine leadership skills” or “machine philosophy”–without it, such a feedback loop might quickly sputter out.
We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion–i.e., against intelligence), is also important. Most science-fiction stories prefer their artificial intelligences to be extremely motivated to do things–for example, enslaving or wiping out humans, if The Matrix and Terminator II have anything to say on the topic. But I find just as plausible the robot Marvin, the superintelligent machine from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, who used his enormous intelligence chiefly to sit around and complain, in the absence of any big goal.
Indeed, a really advanced intelligence, improperly motivated, might realize the impermanence of all things, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence, concluding that inventing an even smarter machine is pointless. (A corollary of this thinking might explain why we haven’t found extraterrestrial life yet: intelligences on the cusp of achieving interstellar travel might be prone to thinking that with the galaxies boiling away in just 10^19 years, it might be better just to stay home and watch TV.) Thus, if one is trying to build an intelligent machine capable of devising more intelligent machines, it is important to find a way to build in not only motivation, but motivation amplification–the continued desire to build in self-sustaining motivation, as intelligence amplifies. If such motivation is to be possessed by future generations of intelligence–meta-motivation, as it were–then it’s important to discover these principles now.
There’s a second issue. An intelligent being may be able to envision many more possibilities than a less intelligent one, but that may not always lead to more effective action, especially if some possibilities distract the intelligence from the original goals (e.g., the goal of building a more intelligent intelligence). The inherent uncertainty of the universe may also overwhelm, or render irrelevant, the decision-making process of this intelligence. Indeed, for a very high-dimensional space of possibilities (with the axes representing different parameters of the action to be taken), it might be very hard to evaluate which path is the best. The mind can make plans in parallel, but actions are ultimately unitary, and given finite accessible resources, effective actions will often be sparse.
The last two paragraphs apply not only to AI and ET; they also describe features of the human mind that affect decision making in many of us at times–lack of motivation and drive, and paralysis of decision making in the face of too many possible choices. But it gets worse: we know that a motivation can be hijacked by options that simulate the satisfaction that the motivation is aimed toward. Substance addictions plague tens of millions of people in the United States alone, and addictions to more subtle things, including certain kinds of information (such as e-mail), are prominent too. And few arts are more challenging than passing on motivation to the next generation, for the pursuit of a big idea. Intelligences that invent more and more interesting and absorbing technologies, which better grab and hold their attention while reducing impact on the world, might enter the opposite of a singularity.
What is the opposite of a singularity? The singularity depends on a mathematical recursion: invent a superintelligence, and then it will invent an even more powerful superintelligence. But as any mathematics student knows, there are other outcomes of an iterated process, such as a fixed point. A fixed point is a point that, when a function is applied, gives you the same point again. Applying such a function to points near the fixed point will often send them toward the fixed point.
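The two outcomes of an iterated process are easy to see numerically. Assuming nothing beyond ordinary real-valued iteration, repeatedly applying a growth map runs away, while a contraction settles onto its fixed point:

```python
import math

def iterate(f, x0, steps=100):
    """Apply f to x0 repeatedly and return the final value."""
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

# "Singularity" picture: each application amplifies the last,
# and the orbit grows without bound.
grow = lambda x: 1.5 * x

# "Fixed point" picture: cosine contracts every starting point
# toward the x where cos(x) == x, roughly 0.739.
settle = math.cos
```

Starting `settle` from any nearby point gives the same limit, which is exactly the self-reinforcing, status-quo behavior the paragraph above attributes to a societal fixed point.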
A “societal fixed point” might therefore be defined as a state that self-reinforces, remaining in the status quo–which could in principle be peaceful and self-sustaining, but could also be extremely boring–say, involving lots of people plugged into the Internet watching videos forever. Thus, we as humans might want, sometime soon, to start laying out design rules for technologies so that they will motivate us to some high goal or end–or at least away from dead-end societal fixed points. This process will involve thinking about how technology could help confront an old question of philosophy–namely, “What should I do, given all these possible paths?” Perhaps it is time for an empirical answer to this question, derived from the properties of our brains and the universe we live in.

Categories: News Stories

Shift change facilitation

September 4, 2009

From NYTimes, 9/3/2009
Doctor and Patient
When Patient Handoffs Go Terribly Wrong
I have always felt uneasy about patient handoffs, transferring my responsibility as a doctor to another physician. We cannot be on duty all the time, but I worry that I am playing some real-life medical version of the children’s game “Telephone” where the complexity of my patient’s care will be watered down, misinterpreted and possibly mangled with each re-telling. I wonder, too, if it is only a matter of time before the kind of mistake that happened to Joey (not his real name) might happen to one of my patients.
Two-year-old Joey had been healthy since birth. But a few weeks before I met him, his mother noticed that the left side of his face had started to swell. By the time he appeared in clinic, it looked as if a ping pong ball had been permanently lodged in his cheek.
Despite the senior surgeon’s years of experience, removing the mass from Joey’s cheek proved to be a challenge. It had insinuated itself into every possible crevice; and the nerve that innervated the muscles of his mouth and cheek — the nerve of facial expression — was embedded deep within.
The senior surgeon spent hours daintily picking away at the mass, sorting through strands of fibrous connective tissue, many of them neuronal doppelgangers, in order not to injure the buried nerve. After being nibbled at with surgical instruments for hours, the toddler’s flayed cheek looked more like a puppy’s well-worn chew toy than any recognizable set of anatomical structures. When the surgeon had at last cleaned the strand he believed was the nerve, he looped a slender yellow rubber tie around it. Then, without warning, the surgeon put down his instruments and looked up at the clock. He barked at the nurse to call for one of his colleagues, then stepped away from the table and ripped off his surgical mask and gown. “You take over,” he said when his colleague came into the room. “It’s mostly out, but I need to leave.” None of us knew if he had to attend to another urgent patient matter in the hospital or how long he might be gone.
The covering surgeon stepped up to the table, poked his finger around the remnants of the mass, then pulled on the rubber tie and the presumptive nerve. “What’s this?” he asked, reaching for a pair of scissors.
Without waiting for a response, he snipped the strand in two.
That night, I hovered outside of Joey’s room, waiting for him to wake up, laugh, cry or simply move his mouth. But it wasn’t until two days later, after we had removed all the gauze covering his incision, that I saw what I had feared I would. Joey grinned, but his left cheek remained frozen. His once symmetrical smile had been transformed into a contorted grimace.
Years later I am still haunted by the memory of Joey and that handoff which went terribly wrong. I don’t know what caused the first surgeon to suddenly leave. And because the operation was so difficult and the field of view so small, I’m not sure if the nerve might have been damaged or transected even before the second surgeon stepped in. But I do know that the surgeons never communicated clearly about what had been done when they traded places at the table. And I also know that transitions between physicians are now, more than ever, a routine and frequent part of health care.
Like many others among my professional peers, I find myself signing out and my patients being handed off more than I ever thought would happen. While older patients with multiple chronic conditions will see up to 16 doctors a year, some of the healthiest younger patients I see count not only a primary care physician among their doctors but also a handful of specialists. Hospitalized patients, no longer cared for by their primary care doctors but by teams of fully trained doctors, or hospitalists, in addition to groups of doctors-in-training, are passed between doctors an average of 15 times during a single five-day hospitalization. And young doctors, with increasing time pressures from work hours reforms, will sign over as many as 300 patients in a single month during their first year of training.
While these changes have led to improvements in certain aspects of quality of care and better-rested physicians, they have also resulted in frank fragmentation. It’s hardly surprising, then, that according to two recent studies, the vast majority of hospitalized patients are unable to name their doctor, and an equally large percentage of their discharge summaries have no mention of tests and studies that are pending.
Over the last decade, medical researchers and educators turned their attention increasingly to this issue. I spoke recently to Dr. Vineet M. Arora, an assistant professor of medicine at the University of Chicago, who studies patient handoffs and the ways in which they might be improved.
Handoffs are supposed to mitigate any issues that arise when doctors pass the responsibility for patient care to a colleague. “But that requires investing time and effort,” Dr. Arora said, “and using handoffs as an opportunity to come together to see how patient care can be made safer.”
Most of the time, however, handoffs are fraught with misunderstanding and miscommunication. Physicians who are signing out may inadvertently omit information, such as the rationale for a certain antibiotic or a key piece of the patient’s surgical history. And doctors who are receiving the information may not assume the same level of responsibility for the care of that patient. “Handoffs are a two-way process,” Dr. Arora observed. “It’s a complex interplay.” Missed opportunities to impart important patient information result in more uncertainty for the incoming doctor. That uncertainty leads to indecision which can ultimately result in significant delays during critical medical decisions.
More recently, Dr. Arora pointed out, researchers have begun looking for new ways to approach patient handoffs, studying other high-stakes shift-oriented industries like aviation, transportation and nuclear power, as well as other groups of clinicians.
“We can borrow from the models of other health care practitioners,” Dr. Arora said. Nurses, for example, have long placed great importance on the process of handing off patients. “It’s pretty difficult to find and interrupt a nurse during shift change because they have made it a high priority,” Dr. Arora remarked. “There’s a dedicated time, a dedicated room, a culture that has developed around it. In contrast, physicians have historically emphasized continuity much more than handoffs. As a result, doctors’ signouts happen quickly, last-minute and on the fly.”
By incorporating more efficient methods of communication, the hope is that patient care transitions will eventually become seamless and less subject to errors. But even more important than teaching and learning those methods, Dr. Arora says, will be transforming physician attitudes.
“It’s critical that we invest the time and that our payment system eventually reflects how important that time is,” Dr. Arora said. “But we also need to change our profession’s thinking so that handoffs are a priority and not an afterthought. We need to be able to say that the ability to transition care well is an important metric by which you will be judged to be a good doctor.
“Good handoffs are about best practices, about being a good doctor. Investing time in them is the right thing to do.”

Categories: News Stories

Seizure makes woman think she is a man

September 3, 2009

From Fox News

Seizure Makes Woman Mistake Herself for a Man

Thursday, September 3, 2009

For the first time, scientists report an instance of a brain seizure making someone believe they underwent a sex transformation.

The case in question involved a 37-year-old woman admitted to an epilepsy center in Germany for seizures in 2006. In addition to episodes of nausea, fear, and sometimes déjà vu several times a week, she also reported occasionally perceiving the following delusion:

“I’m no longer feeling to be a female,” the scientists reported her saying. “I have the impression to transform into a male. My voice, for example, sounds like a male voice that moment. One time, when I looked down to my arms during this episode, these looked like male arms including male hair growth.”

Even stranger, this delusion was not limited to herself. She also saw nearby women as becoming men. “One time another woman, a friend of mine, was in the same room, I perceived also her as becoming a male person including changing sound of her voice,” the scientists reported her saying.

Prior to these delusions, the woman had been healthy, with no history of mental disorders apart from symptoms of depression in later adulthood. “The patient never experienced a similar phenomenon outside the seizures,” explained researcher Burkhard Kasper, a neuroscientist at the University of Erlangen in Germany. Anticonvulsive drugs later relieved her of these delusions and most of her other symptoms.

MRI scans revealed a tumor in the woman’s brain that was apparently linked with the seizures. The kind of tumor in question, a ganglioglioma, is generally considered benign. “We expect her to have a long life,” Kasper said.

The tumor was located in the right amygdala, with irregular activity seen in the surrounding right temporal lobe. The amygdala seems to play an important role in processing human identity, including aspects like familiarity, emotional state, and sex, and past studies have shown that electrical stimulation of the temporal lobe can trigger doubt about one’s sexual identity.

These findings suggest that dedicated brain circuits underlie the perception of gender.

Although this is so far an isolated case, “neuroscience has learned a lot from single patients,” Kasper mentioned, pointing out cases such as “HM,” or Henry Molaison, a man whose inability to commit new events to long-term memory after brain surgery for epileptic seizures helped revolutionize science’s understanding of how memory works.

In seizure patients, brain surgery is sometimes used to implant electrodes that not only can help monitor and control the disorder, but also can stimulate cells to better understand brain circuits. While it might be ideal to use such electrodes to probe the unusual delusions seen in this woman, surgery of any kind is not called for at the moment, since drugs help keep her seizures well under control, Kasper said.

Categories: News Stories