Archive for November, 2009

Cloud security

November 27, 2009 Leave a comment

How Secure Is Cloud Computing?

Cryptography solutions are far-off, but much can be done in the near term, says Whitfield Diffie.

By David Talbot

Cloud computing services, such as Amazon’s EC2 and Google Apps, are booming. But are they secure enough? Friday’s ACM Cloud Computing Security Workshop in Chicago was the first such event devoted specifically to cloud security.

Cryptography pioneer: Whitfield Diffie, a cryptographer and security researcher, and visiting professor at Royal Holloway, University of London.
Credit: David Talbot

Speakers included Whitfield Diffie, a cryptographer and security researcher who, in 1976, helped solve a fundamental problem of cryptography: how to securely pass along the “keys” that unlock encrypted material for intended recipients.

Diffie, now a visiting professor at Royal Holloway, University of London, was until recently a chief security officer at Sun Microsystems. Prior to that he managed security research at Northern Telecom. He sat down with David Talbot, Technology Review’s chief correspondent.

Technology Review: What are the security implications of the growing move toward cloud computing?

Whitfield Diffie: The effect of the growing dependence on cloud computing is similar to that of our dependence on public transportation, particularly air transportation, which forces us to trust organizations over which we have no control, limits what we can transport, and subjects us to rules and schedules that wouldn’t apply if we were flying our own planes. On the other hand, it is so much more economical that we don’t realistically have any alternative.

TR: The analogy is interesting, but air travel is fairly safe. So how serious are today’s cloud computing security problems, really?

WD: It depends on your viewpoint. From the view of a broad class of potential users it is very much like trusting the telephone company–or Gmail, or even the post office–to keep your communications private. People frequently place confidential information into the hands of common carriers and other commercial enterprises.

There is another class of user who would not use the telephone without taking security precautions beyond trusting the common carrier. If you want to procure storage from the cloud you can do the same thing: never send anything but encrypted data to cloud storage. On the other hand, if you want the cloud to do some actual computing for you, you don’t have that alternative.
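Diffie's "never send anything but encrypted data to cloud storage" rule is easy to sketch concretely. The following is an illustrative, stdlib-only Python sketch: it uses HMAC-SHA256 in counter mode as a keystream with an encrypt-then-MAC tag, standing in for a vetted cipher such as AES-GCM; the key handling and data are hypothetical, and the upload step is left out.

```python
# Illustrative sketch: encrypt locally before handing data to cloud storage.
# HMAC-SHA256 in counter mode serves as the keystream (a stdlib-only stand-in
# for a real cipher such as AES-GCM); encrypt-then-MAC provides integrity.
import hmac, hashlib, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag,
            hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("ciphertext tampered with")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)                          # stays on your premises
blob = encrypt(key, b"quarterly financials")  # only this blob goes to the cloud
assert decrypt(key, blob) == b"quarterly financials"
```

The cloud provider only ever sees `blob`; without `key`, stored data is opaque. As Diffie notes, this works for storage but not for outsourced computation.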

TR: What about all of the interesting new research pointing the way to encrypted search and even encrypted computation in the cloud?

WD: The whole point of cloud computing is economy: if someone else can compute it cheaper than you can, it’s more cost effective for you to outsource the computation. It has been shown to be possible in principle for the computation to be done on encrypted data, which would prevent the person doing the computing from using your information to benefit anyone but you. Current techniques would more than undo the economy gained by the outsourcing and show little sign of becoming practical. You can of course encrypt the data between your facility and the elements of the cloud you are using. That will protect you from anyone other than the person doing the computing for you. You will have to choose accountants, for example, whom you trust.

TR: If a full cryptographic solution is far-off, what would a near-term solution look like?

WD: A practical solution will have several properties. It will require an overall improvement in computer security. Much of this would result from care on the part of cloud computing providers–choosing more secure operating systems such as OpenBSD and Solaris–and keeping those systems carefully configured. A security-conscious computing services provider would provision each user with its own processors, caches, and memory at any given moment and would clean house between users, reloading the operating system and zeroing all memory.

An important component of security will be the quality of the personnel operating the data centers: good security training and appropriate security vetting. A secure data center might well be administered externally, allowing a very limited group of employees physical access to the computers. The operators should not be able to access any of the customer data, even as they supervise the scheduling and provisioning of computations.

TR: Would any public-policy moves help or hurt the situation?

WD: A serious potential danger will be any laws intended to guarantee the ability of law enforcement to monitor computations that they suspect of supporting criminal activity. Back doors of this sort complicate security arrangements with two devastating consequences. Complexity is the enemy of security. Once Trojan horses are constructed, one can never be sure by whom they will be used.


Encryption and cloud computing

November 27, 2009 Leave a comment

Encryption Is Cloud Computing Security Savior

Posted by Alexander Wolfe, Nov 16, 2009 03:36 PM

I’m beginning to think that fears about cloud security are overblown. The reason: an intellectual framework is already in place for protecting data, applications, and connections. It’s called encryption. What’s evolving now, and isn’t anywhere near fully baked, is a set of agreed-upon implementations and best practices. Today’s post talks about some relevant and interesting work from Trend Micro and from IBM.



Along with the leadership we’re seeing from Trend Micro and IBM, it’s only fair to add that most of the security vendors and cloud-service providers themselves are researching this stuff. (I’ll cover those efforts in future posts.) One impediment to writing about cloud security is that people tend to be close-mouthed, because of the seriousness of security, as per the old phrase: “If I told you, then I’d have to kill you.”

From my perspective, as I’ve started blogging about cloud security — see “Cloud Security In Focus Amid Data Theft Fears” — I’ve begun to see up close this reluctance of experts to provide deep data dumps. (A corollary is that those who don’t know tend to be voluble.)

Quite apart from the fact that chatter is antithetical to the security and intelligence-community ethos (not always, though), there’s so much disparate activity it’s hard to get a holistic understanding of where things are headed. Thus, my funneling everything into the encryption bucket is an attempt to summarize and make some sense of where the nexus of activity lies.

So, while I’ve been hoping to pull together comprehensive posts, I can see what I’m going to have to do is offer up incomplete bits and pieces, blogging about this stuff as I get wind of it. Accordingly, here are three interesting, albeit very loosely connected, items:

Encryption is already being used

First, here’s a heads up I got from one reader (as a comment to my earlier post), about his use of encryption to secure his cloud connections:

“I can only speak from experience using Amazon Web Services since early 2006, but all the tools are there if only they are used. For instance you can have rotating keys and my favorite is private VPNs. If you have a good working security structure in place you can now use a private VPN from within your existing system to scale cloud resources without opening your system to the outside. These are a lot of the same issues we faced when we hooked up those pesky LANs to the transactional mainframe systems via SNA gateways in the early ’80s.”


Improved cloud encryption techniques are being researched

My contacts at Trend Micro have hinted at some conceptual work they’re doing, for future delivery at an unspecified date (i.e., I want to make clear that they’re not yet talking productization), about an encryption scheme for public cloud computing. The work is based on technology acquired from Identum Ltd., a British startup incubated at Bristol University, which Trend Micro acquired in 2008. Identum’s work has formed the basis for the e-mail encryption solutions currently offered by Trend.

Identum’s encryption expertise is now in play in this cloud research. The basic, and very powerful, idea is to apply encryption agents to every virtual computing instance. Thus, every VM would have its own resident manager to ensure the proper application of encryption security resources.

The big win here is you’d have, in essence, automated application of security policies everywhere. Thus, you’d have cryptographic key management built into the process, and also not have to worry about unprotected VM instances amongst your computing resources.

Third time’s a charm

(OK, I couldn’t think of a good subhead.) As a transition between the Trend Micro item and this one on IBM, I should mention that management of cryptographic keys is by no means a trivial thing. When you think about it, all of your cloud security rests on being able to generate and hand out those keys, while keeping them out of the hands of bad guys. (Hackers aren’t going to be able to break your keys; what they’ll do to breach your security is to steal them instead.)

Which leads into the IBM research on homomorphic encryption. (See press release, IBM Researcher Solves Longstanding Cryptographic Challenge, from July.) This is very arcane stuff, but as best I can reduce it, what this IBM breakthrough would enable is that you could send encrypted data throughout the cloud, manipulate it any way you want, and then at the end of the day, you’d still be able to decrypt it.

Currently, there are severe limitations on the operations you can perform on encrypted data, because some of the manipulations will muck it up so that it’s no longer decryptable.

Why is this a problem? Well, you want to be able to work on encrypted data as long as possible without having to render it back into its plainly visible form. That way, you don’t have to mess around with keys, or, more toxically, provide those keys to users you’re not sure you trust.
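The homomorphic idea can actually be seen in miniature today: textbook RSA already lets an untrusted party multiply two ciphertexts without ever seeing the plaintexts. Below is a toy Python sketch using the classic insecure textbook parameters (p=61, q=53); a real fully homomorphic scheme, like the one in IBM’s research, would additionally support addition and arbitrary computation, which is the hard part.

```python
# Toy demonstration: textbook RSA is multiplicatively homomorphic, so the
# "cloud" can multiply two encrypted numbers blind. Tiny textbook parameters
# only -- wildly insecure, for illustration of the concept.
p, q = 61, 53
n, e, d = p * q, 17, 2753       # classic small example; d*e = 1 mod phi(n)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)   # what you hand to the cloud
c_product = (ca * cb) % n         # cloud computes on ciphertexts only
assert decrypt(c_product) == a * b  # you decrypt 42, computed "blind"
```

The cloud never learned `a` or `b`, yet handed back an encryption of their product. The catch the article describes is exactly this partiality: you get one operation, not arbitrary programs.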

The thing with this IBM research is it’s not really clear that they’ve solved the problem. The always authoritative Bruce Schneier says that the work is theoretically impressive but completely impractical. Regardless, IBM gets props for pushing things forward.

In closing, I’d like to point you to a good post from George Reese over at O’Reilly Community: Twenty Rules for Amazon Cloud Security. The basic thrust of his advice is “encrypt everything” and only allow your decrypt key to surface for the very brief instances you’re using it.

A more secure cloud

November 27, 2009 Leave a comment

Thursday, October 01, 2009
A More Secure, Trustworthy Cloud
Virtual private clouds bridge real and virtual computing infrastructure.
By Christopher Mims
After weeks of testing, Amazon is preparing to bring out of beta a service that will let customers merge their own computer systems with its cloud-computing services.
Amazon’s Virtual Private Cloud (VPC) service, currently in beta testing, integrates remote, virtual resources with physical computers, giving customers the option to use cloud computing while keeping sensitive information on one of their own machines. Amazon’s service is the latest part of a larger trend in cloud computing: creating secure connections between real and virtual machines. Similar offerings are available from other cloud-computing companies, including CohesiveFT, IBM, and Enomaly.
Cloud computing allows companies to perform feats of computation that would otherwise have been impossible, or at least prohibitively expensive. However, cloud computing has generally lacked the security features typically required by small and medium-sized enterprises.
Amazon’s technology enables cloud-based resources to appear as part of a regular local network of servers. It uses Internet Protocol Security (IPsec) to establish a secure connection with existing data centers. Servers in the cloud can then be assigned specific network addresses and mapped onto an existing network.
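The addressing side of this can be sketched with Python's standard `ipaddress` module. The subnets and addresses below are made up for illustration; the point is simply that a cloud VM is assigned an address inside the existing corporate plan, so it appears local, while the IPsec tunnel carries that private traffic over the public Internet.

```python
# Sketch of the VPC addressing idea: cloud instances get addresses carved out
# of the existing corporate subnet, so they look like local servers.
# All subnets and addresses here are invented for illustration.
import ipaddress

corporate_net = ipaddress.ip_network("10.20.0.0/16")   # existing data center plan
cloud_subnet  = ipaddress.ip_network("10.20.99.0/24")  # range delegated to the VPC

# A cloud VM mapped into the corporate address plan:
vm = ipaddress.ip_address("10.20.99.17")
assert vm in cloud_subnet and vm in corporate_net

# A random public address is simply outside the plan:
assert ipaddress.ip_address("54.23.1.9") not in corporate_net
```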
Previously, computer network concepts could not easily be realized within the cloud, because the network itself was not virtualized–just the processing and storage. Amazon’s VPC offering goes some way toward allowing the virtualization of this infrastructure. “I can take a machine that’s lived for 10 years at one [address] in my data center and give it that same address on Amazon,” says Patrick Kerpan, CTO of cloud-computing software vendor CohesiveFT.
One of the reasons why there has been so much demand for VPCs, says Kerpan, is that enterprise IT teams are so comfortable with legacy computer networks. “The world of network thinking–the tools, the subnets, et cetera–if you’re a networking team, you’re using skills you’ve mapped to the network in order to solve problems,” says Kerpan. “They build maps in their head and in their tools.”
However, Reuven Cohen, founder and CTO of cloud-computing company Enomaly, argues that no VPC can ever be as secure as a physically isolated network. “It provides an extra level of security from your neighbor seeing your data,” says Cohen, “but it doesn’t address one fundamental problem: the idea of trust. If you’re using Amazon, you inherently have to trust them.”
James Comfort, vice president of integrated delivery platforms at IBM, says that VPCs are only one solution in a spectrum of potential secured cloud offerings. “VPC is a bit of a misnomer,” says Comfort. “In our mind, the difference between the private and the public cloud is a business model.” The difference is that a private cloud is run internally by a company, solely for its own use, while a public cloud consists of leased resources from a cloud service provider.
For large companies, it may be safer, and cheaper, to rely entirely on internal infrastructure. According to a McKinsey & Company report issued in April, moving a large company’s data center architecture to a cloud-computing platform can as much as double costs.
For small and medium enterprises, however, virtual private cloud offerings from Amazon and others may prove more attractive. “You can tell customers–millions of IT people worldwide–you need to relearn everything [so that you can move your infrastructure to the cloud,] or you can make the migration as easy as humanly possible,” says Kerpan. “If people have learned a set of skills, we try to figure out how we can make it natural for them to continue to use those skills.”
Copyright Technology Review 2009.

3D maps on the move

November 27, 2009 Leave a comment

Wednesday, November 18, 2009
Making 3D Maps on the Move
A vehicle uses off-the-shelf components to build 3D maps of an area.
By Kristina Grifantini
At a robotics conference last week, a vehicle called ROAMS demonstrated a cheap approach to mobile map-making.
ROAMS (Remotely Operated and Autonomous Mapping System) was created by researchers at the Stevens Institute of Technology in Hoboken, NJ, with funding from the U.S. Army. It uses several existing mapping technologies to build 3D color maps of its surroundings, and it was demonstrated at the 2009 IEEE conference on Technologies for Practical Robot Applications in Woburn, MA.
The system uses LIDAR (Light Detection and Ranging), which involves bouncing a laser off a rapidly rotating mirror and measuring how the light bounces back from surrounding surfaces and objects. The same technology is already used to guide autonomous vehicles, to make aerial maps, and in spacecraft landing systems.
A conventional 3D LIDAR system, which consists of several lasers pointing in different directions, costs over $100,000. The Stevens researchers created a cheaper mapping system by mounting a commercial 2D LIDAR sensor, which costs about $6,000, on a pivoting, rotating framework atop the vehicle. While the system has a lower resolution than a regular 3D LIDAR, it could still be used for low-cost architectural surveying and map making in military situations, the researchers say. “The prototype system is around $15,000 to $20,000,” says Biruk Gebre, a research engineer at Stevens who demonstrated the device.
The system takes about 30 seconds to scan a 160-meter-wide area. A color camera also on the rotating frame provides color information that is added to the map later on. And the Stevens researchers developed a way to maintain the same resolution by automatically adjusting the scanning process depending on the proximity of objects. A human operator rides in a larger vehicle that follows the robotic one from up to a mile away, says Kishore Pochiraju, professor and the director of the Design and Manufacturing Institute at Stevens. Ultimately, says Pochiraju, “we want to leave this robot in a location and ask it to generate a complete map.” Such a vehicle could, for example, drive into a dangerous area and generate a detailed map for military personnel.
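The geometry behind turning a pivoting 2D scanner into a 3D point cloud is straightforward: each 2D reading is a range and an in-plane angle, and the pivot angle of the frame supplies the third dimension. The sketch below assumes a simple angle convention; it is an illustration of the technique, not the Stevens team's actual design.

```python
# Sketch: each 2D LIDAR reading (range r, in-plane angle theta) plus the
# frame's tilt angle phi yields one 3D point. Angle conventions assumed.
import math

def to_3d(r, theta, phi):
    """r: range (m); theta: angle within the 2D scan plane;
    phi: tilt of the scan plane about the vehicle's horizontal axis."""
    x = r * math.cos(theta) * math.cos(phi)
    y = r * math.sin(theta)
    z = r * math.cos(theta) * math.sin(phi)
    return (x, y, z)

# One simulated sweep: the 2D scanner fires at several in-plane angles
# while the frame steps through a few tilt positions.
cloud = [to_3d(10.0, math.radians(t), math.radians(p))
         for p in (0, 15, 30) for t in (-45, 0, 45)]
assert len(cloud) == 9

# A return straight ahead with no tilt lies on the x-axis:
x, y, z = to_3d(10.0, 0.0, 0.0)
assert abs(x - 10.0) < 1e-9 and abs(y) < 1e-9 and abs(z) < 1e-9
```

Sweeping `phi` slowly while `theta` spins fast is what lets a ~$6,000 2D unit approximate a much more expensive multi-laser 3D system.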
“They’re using a relatively low-cost system,” says John Spletzer, an associate professor at Lehigh University who uses similar technology to create autonomous wheelchairs. “There’s a lot of groups working on it; it’s pretty interesting.”
Nicholas Roy, an associate professor at MIT who develops autonomous and self-navigating vehicles, also notes that other research groups have developed similar technology. He says that the biggest challenges in autonomous map-making are identifying obstacles and sharing mapping between several robots.
Copyright Technology Review 2009.

Map maker: This vehicle uses a rotating laser and video camera to generate 3D maps of its environment.
Credit: Stevens Institute of Technology

Categories: News Stories

Cool tech options for cars

November 27, 2009 Leave a comment

9 cool tech options for your car
Cars that park themselves. Driver-passenger split screen computers. Night vision. Just a few of the innovations that make driving easier, safer and more fun.
Ford: Active Park Assist

Ford Flex and Escape, Lincoln MKS and MKT
Ford isn’t the first to take a crack at the self-parking car, but it is the first to make it genuinely useful. Ford’s Active Park Assist can actually maneuver your car into a parallel parking space in less time and with less hassle than doing it yourself.
And it’ll probably do a better job of getting the car into the space. Plus, it works in a variety of conditions. You don’t have to wait for a space on perfectly level ground with good lighting.
All you have to do is press a button and slow down to less than about 20 mph. Sonar sensors on the side of the car scan for a viable space. When one is found, just pull forward until the car tells you to stop. Then put it in reverse, take your hands off the steering wheel and back up slowly. The car handles the rest like a pro.
Of course, it’s up to you to make sure the space is legal. Ford isn’t going to pay your parking tickets.
by Peter Valdes-Dapena, senior writer

Last updated November 13 2009: 4:49 PM ET



Mercedes-Benz: SplitView screen

Mercedes-Benz S-class in early 2010
By now, you’re probably familiar – even if you don’t have one yourself – with the computer screen many new cars have between the front seats. In order to protect the driver from distraction, such screens can’t show movies, for instance, or even allow you to enter navigation destinations or other complex jobs while the car is moving.
But what about the passenger? There’s little danger in distracting the passenger. In fact, the passenger might actually like some distraction. Starting in January, the Mercedes-Benz S-class will be available with a screen – a single screen – that shows separate views to the driver and the front-seat passenger. It’s sort of like those baseball cards that change images as you turn them in your hand, but the effect is much better.
Nissan: Around View Monitor

Infiniti EX35
Rearview back-up cameras are very useful, but aren’t much good when it comes to squeezing around, say, a double-parked truck on a narrow street. Some vehicles now offer side and front-view cameras that can help, but they give distorted wide-angle images. That makes it difficult to judge just how close you are to gouging a fender or cracking a turn signal lamp.
The Around View Monitor, offered on Nissan’s Infiniti EX, uses simple computation to solve the problem. It takes images from four wide-angle cameras – one on each side and each end of the car – digitally flattens them and combines them into what appears to be an aerial view of your own vehicle. With this, you can easily see where your car is in relation to everything nearby. It is like having your own private satellite hovering over your car and beaming down images as you drive around that truck.
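The "flattening" the article mentions is, at its core, a planar homography: a 3x3 matrix that re-maps each camera pixel to top-down ground-plane coordinates, after which the four views are stitched together. The matrix below is a made-up example (a pure scale-and-shift), not Nissan's calibration; a real system would derive one homography per camera from calibration targets.

```python
# Sketch of the core flattening step: a 3x3 planar homography maps an image
# pixel (x, y) to bird's-eye ground coordinates. The matrix H is invented
# for illustration; real values come from per-camera calibration.
def apply_homography(H, x, y):
    denom = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / denom,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / denom)

# Example: a trivial scale-plus-shift homography, easy to verify by hand.
H = [[0.5, 0.0, 10.0],
     [0.0, 0.5, 20.0],
     [0.0, 0.0, 1.0]]
assert apply_homography(H, 100, 200) == (60.0, 120.0)
```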
Ford: EcoBoost

Ford Taurus and Flex, Lincoln MKT and MKS
The company that invented the mass-market V8 engine in the 1930s – because Henry Ford insisted, for some reason, that cylinders must come in multiples of four – has finally come up with its replacement.
Ford’s EcoBoost V6 engines use two turbochargers combined with a complex computer-controlled fuel injection system to produce the power of larger V8 engines. What’s more, these EcoBoost engines use no more fuel than Ford’s non-boosted engines of the same size. And they’ll run just fine on regular gasoline, although you’ll need premium fuel for maximum power.
Another benefit of this system, besides the power output, is how quickly that power is delivered. Because the engines are relatively small, they get to full throttle more quickly, delivering their maximum pulling power almost as soon as you press down on the gas pedal.
Coming soon: EcoBoost 4-cylinder engines that deliver like V6s

BMW: Night vision

BMW 7-series
Night vision systems in cars would seem to be of limited use since most roads are well lit and, besides, other cars have lights on them, too. But pedestrians don’t, and that’s where BMW’s system proves its worth. Infrared cameras scan the road ahead and computers that are programmed to recognize human shapes point out pedestrians in or near the road.
The system not only recognizes people ahead. It also indicates what direction they’re heading. If people to the side of the road are facing as if they’re about to step into your path, the car will alert you.
Toyota: Lane Keep Assist

Toyota Prius, Lexus HS250h
Plenty of cars today have what’s known as “active cruise control.” Unlike typical cruise control systems that allow you to simply set a speed, these systems use radar to scan the road ahead for slower moving vehicles. Your car will then automatically slow down to maintain a safe following distance.
A lot of cars also have lane departure warning systems. These use cameras to find lane markings on either side of the car. They emit a warning if you’re about to drift out of your lane.
Lexus’ Lane Keep Assist takes these systems a step further. When you’re using active cruise control, the Lexus HS 250h will also use its lane departure warning system not only to warn you that you are drifting but to actually correct your path. If your car starts to drift out of its proper lane, the car will gently steer itself back to the center.
It’s not too insistent, of course. If you really want to leave your lane without using a turn signal you just need to apply a little muscle to the steering wheel.
Ford: Work Solutions

Ford F-series trucks, E-series vans and Transit Connect vans
For people who simply drive their cars, and don’t practically live in them, Ford offers Sync, a system that ties your cell phone and MP3 player in with the car’s stereo and a very easy-to-use voice recognition system. But for those who use their truck or van as mobile offices and tool sheds, Ford now has a system called Ford Work Solutions.
One feature, called Tool Link, allows you to put radio-frequency ID tags on your tools and equipment. RFID scanners in the back of the van or the bed of the truck can then tell you whether you have all the tools you need for a given job or if you’ve left any behind at the job site.
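The Tool Link check reduces to a set comparison: the tags the bed scanner reads versus the tool list for the job. A minimal sketch, with invented tag IDs (Ford's actual data model is not public here):

```python
# Sketch of the Tool Link idea: compare RFID tags read by the truck-bed
# scanner against the required tool list for the job. Tag IDs are invented.
required = {"drill-A1", "saw-B2", "level-C3", "nailgun-D4"}
scanned  = {"drill-A1", "level-C3", "nailgun-D4"}  # what the scanner sees

missing = required - scanned   # left behind at the shop or job site
extra   = scanned - required   # tools aboard that the job doesn't call for

assert missing == {"saw-B2"}
assert extra == set()
print(f"Missing tools: {sorted(missing)}")
```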
The in-dash computer provides, as the name implies, a computer complete with spreadsheet, word processing and presentation software built right into the truck. There is also a remote keyboard. Besides all the onboard software, you can also access a computer somewhere else – say, the one on your desk at the office – right from the truck.
Additionally, there are systems for fleet management that allow you to check, from your truck, on all the other vehicles in your fleet.
GM: Pause-and-Play Radio

Various recently introduced or redesigned Cadillac, Chevrolet, GMC and Buick models
This is one of those handy features that you could have in your car for years and never notice, all the while needlessly missing hours of great radio shows and ball games.
On several of General Motors’ newer cars and SUVs, higher-end stereos have a Pause/Play button along with the usual controls. You’d probably just figure it is for the CD and MP3 players.
In fact, it has special powers. When you arrive at your home or the store and you’re in the middle of a radio show, you can just hit Pause. The sound stops as the car records the show. When you get back to the car up to a half hour later, you can just hit the button again and hear everything you missed.
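Under the hood, pause-and-play is a fixed-size ring buffer: the radio continuously records the most recent half hour, and Pause just marks where playback should resume. A toy sketch using Python's `collections.deque` (frame sizes and timing are illustrative assumptions, not GM's implementation):

```python
# Sketch of pause-and-play buffering: a bounded ring buffer keeps only the
# most recent ~30 minutes of audio; older frames are silently dropped.
# One string stands in for one second of audio -- toy values throughout.
from collections import deque

SECONDS_BUFFERED = 30 * 60
buffer = deque(maxlen=SECONDS_BUFFERED)  # ring buffer: old frames fall off

# The radio keeps recording while you're away from the car:
for t in range(40 * 60):                 # 40 minutes elapse
    buffer.append(f"frame@{t}s")

# Only the most recent 30 minutes survive:
assert len(buffer) == SECONDS_BUFFERED
assert buffer[0] == "frame@600s"         # oldest frame is the 10-minute mark
assert buffer[-1] == "frame@2399s"
```

This is also why the feature only rewinds "up to a half hour": anything older has already been overwritten.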
Toyota: Remote touch

Lexus HS, RX
Luxury car makers have been trying for years to find elegant ways to interact with increasingly complex on-board navigation and entertainment systems. Up to now the choices have been knobs that you spin, wiggle and press, like on Mercedes-Benz or BMW, or touch screens, like those on Jaguars and various non-luxury brands.
The knobs can be confusing, while the drawback of touch screens is, well, finger grease – and, because they have no tactile feedback, they require you to look at the screen more.
The answer from Toyota’s Lexus division is the Remote Touch control. It works just like the familiar computer mouse you’re probably using now. Move it around until the pointer gets to something you want to click on, then press a button with your thumb.
It’s better than a mouse, however, because it allows you to actually “feel” the screen. As your pointer passes over something you can click on, it sticks there for a bit, as if you’re sliding an iron bar over a magnet. That way, your eyes can spend more time on the road and less on the screen.

Categories: News Stories

Issues in buildup in Afghanistan

November 27, 2009 Leave a comment

Afghanistan Crippled by Lack of Runways, Facilities
Posted by David A. Fulghum at 10/30/2009 7:45 AM CDT

The buildup of manned and unmanned aircraft for operations in Afghanistan is being crippled by a lack of bases, aviation ramp space, personnel and sensors that can deal with terrain that bears almost no resemblance to Iraq, says a senior Pentagon planning official who is providing equipment for both Iraq and Afghanistan.

Also jeopardizing the mission are limited infrastructure, housing, specialized facilities and high-altitude runways. The last of these forces smaller gross takeoff weights and longer takeoff distances.

There appears to be no delay in reaching the Pentagon-mandated 50 orbits of Predator UAVs for the theater. What is not available is a concept of operations that would divide those capabilities between the two theaters. One clue about the eventual allocation is that since Afghanistan has fewer air bases and less parking and service space, logic dictates that smaller aircraft – such as UAVs – would be concentrated there.

By comparison, Iraq is topographically flat for the most part and has scores of military airfields at sea level with long runways, which makes operations easier for large manned aircraft like the RC-135W Rivet Joint signals/communications intelligence and E-8C Joint Stars radar ground surveillance aircraft.

Often, procedures and flight geometries that work in Iraq don’t do well in Afghanistan, where high mountains, steep slopes and deep valleys require new flight profiles for optimum surveillance, particularly of small groups of people moving in broken terrain. In addition, with a new foe, “there are constantly emerging, unique targets” that aren’t suited for wide area surveillance systems, he says.

For the troops arriving in Afghanistan, commanders are already calling for full motion video, precision signals intelligence and ground moving target indicator radar with enough resolution to track people, referred to as “dismounts,” moving at speeds well below 4 mph. The E-8C has been a stalwart of ground moving target indicator (GMTI) operations in Iraq, but it could lose its primacy in Afghanistan to smaller, more flexible designs.

Equipment is flowing into the main bases of Kandahar and Bagram (where the classified Area 84 is growing exponentially) at a rate that scares some U.S. Army officials. They have publicly complained (at the recent Old Crows Association show) that at Bagram Air Base alone there are 200 systems that can’t communicate with one another. Critics predict that the polluted electronic environment around Baghdad – which has slashed the range of data links and foiled the coverage of some radars and IED jammers – is quickly being duplicated in Afghanistan.

The initial need is for a unique “concept of operations that flow back to operations and integrate into the ISR architecture,” the Pentagon official says. For example, “the operational piece might be to integrate the signature of people walking with a positive identification on the same platform. We can’t do that now.”

The SYRES III EO systems being looked at for Joint Stars may be a potential solution, “but in its present form, it doesn’t make sense with the flight geometries needed for dynamic terrain,” the Pentagon official says. An alternative could be to move full motion video via datalink to Joint Stars operating as a node in a network to make positive IDs of radar imagery. Moreover, “the resolution of the GMTI is nowhere close to what is required in Afghanistan,” the Pentagon official says.

Darpa’s Vader radar pod for synthetic aperture radar and ground moving target indicator imagery – designed for manned and unmanned aircraft – is another option being introduced for use in Afghanistan.

Categories: News Stories

Time of day impact on colonoscopy

November 27, 2009 Leave a comment

November 17, 2009
Vital Signs
Screening: One More Reason to Get Up Early
There are probably better ways to start the day, but a new study suggests that early morning is an ideal time to schedule a colonoscopy.
Physicians detected 20 percent more polyps during the first procedures of the day than they did during procedures performed later in the morning and the early afternoon, the study found.
“Hour by hour, there were fewer polyps found as the day progressed,” said Dr. Brennan M. R. Spiegel, an assistant professor of medicine at the University of California, Los Angeles and an author of the study, which appears in the November issue of the journal Clinical Gastroenterology and Hepatology. “It’s a small effect, very small, but very measurable and definitely there.”
A study at the Cleveland Clinic, published this year, found similar results, noting that 29.3 percent of morning procedures resulted in detection of at least one polyp, compared with 25.3 percent of those in the afternoon.
The new study looked at the results from 477 colonoscopies at the West Los Angeles Veterans Medical Center in 2006 and 2007. Most of the procedures were performed by a physician training in gastroenterology who was supervised by a faculty member.
Procedures occurred from 7:45 a.m. to 1 p.m. The researchers tried to control for other factors that might have affected the results, like the fact that patients usually came in with better bowel preparation for morning procedures. Dr. Spiegel suggests that fatigue may affect physicians.

Categories: News Stories