Engineers eLibrary
Engineers and Natural Sciences eLibrary




Tuesday, May 6, 2014

INTERPHEX 2014 Puerto Rico



Medical Device INTERPHEX PR 2014

Medical Device Puerto Rico brings together industry professionals involved in the manufacturing and packaging of medical devices and instruments who are looking to source new products and services that enhance human life and health.

Product Categories:

  • Adhesives, Coatings, Sealants
  • Components and Parts
  • Disposable Device Components
  • Electronic Components
  • Cleanrooms and Environmental Controls
  • Manufacturing Equipment
  • Plastics/Elastomers
  • Production/Assembly Equipment & Software
  • Packaging
  • Pumps & Valves
  • Sterilization Equipment
  • Testing & Instrumentation
  • Tubing Products and Materials

Attend/Visit? Exhibit?


Posted by enriqueweb123 date 05/06/14 12:43 AM  Category FDA


Tuesday, April 29, 2014

Home for Consumer Updates



FDA Gives Latinas Tools to Fight Diabetes


Nearly 1 million Latinas aren’t aware that they are at risk of developing diabetes. Many will likely not get the preventive or other care they need because they won’t visit a physician or medical clinic, where they could take a simple screening test for the disease.

To help curb the rise of diabetes, the Food and Drug Administration (FDA) offers resources to help women of Latin American ancestry and all Americans reduce their risk or find the most effective treatment.

About 5.5 million Latinas have elevated fasting plasma glucose, and of those, nearly 4 million weren’t told by a health care professional that they were at risk for diabetes, according to a study in the March 2014 edition of Hispanic Health Care International.

The study, "Latinas With Elevated Fasting Plasma Glucose: An Analysis Using NHANES 2009-2010 Data,” was co-authored by Helene Clayton-Jeter, O.D., an optometrist and health programs manager at FDA. NHANES is the National Health and Nutrition Examination Survey, which assesses the health and nutritional status of U.S. adults and children.

Fasting plasma glucose is the level of sugar in the blood after a fast of at least eight hours, and it indicates how the body processes sugar. In people with diabetes, glucose levels may remain high even after they haven’t eaten for hours, says Bruce Schneider, M.D., an endocrinologist at FDA. This is an indication that a patient either has diabetes or is in danger of developing the disease.

Why are so many Latinas at risk of developing diabetes? In addition to susceptibility, many people are not routinely visiting their primary care physician or gynecologist and aren’t aware of their diabetes risk, according to the study. Thus, they aren’t getting the diabetes screening and medical care they need. Early screening and proper care can delay or prevent diabetes in at-risk people.

Overall, diabetes affects nearly 26 million Americans (8.3% of the population). In addition, about 79 million adults (35%) are at risk of developing diabetes. According to the study, there’s a significantly higher proportion of undiagnosed diabetes among Mexican American adults (34.6%) than non-Hispanic whites (17.1%) and non-Hispanic blacks (15.7%).

Detection Is Key

The researchers studied almost 1,500 Hispanic, non-Hispanic white and non-Hispanic black women. They found that fear of and cultural misperceptions about diabetes put Latinas at considerable risk for developing long-term complications of diabetes.

"Early detection helps even the playing field,” says Clayton-Jeter. "As doctors, we need to talk with our patient about diabetes no matter what she is coming in for. Doctors shouldn’t be so focused on the chief complaint that we forget to look at the whole person. We should also look into the whole family, not only because diabetes runs in families but because families live together, eat together and are connected in so many ways.”

The screening test involves the taking of a small blood sample, which is a routine procedure, says Schneider.

"It’s a very easy test to conduct, and it should be part of a routine physical exam – even if you are perfectly healthy,” he says. "Like getting your blood pressure measured, this test is very important because diabetes is often a silent destroyer of multiple organs.”

Complications of diabetes include heart attack, stroke, blindness and kidney disease. Normalizing blood glucose levels through diet, exercise and medications can help prevent many of those complications.

To increase the rate of diabetes detection in Latinas, the study recommends making the test more convenient—via mobile health vans, places of worship and other non-traditional sites. "We need to break down barriers to access so more women can take this simple finger-prick test outside a doctor’s office—at pharmacies, gyms, health centers, dental offices and eye clinics,” Clayton-Jeter says.

Many Latinas don’t see a doctor until they develop symptoms or are in a crisis, she says, adding that optometrists detect many cases of undiagnosed diabetes.

"That’s because when your sugar level is too low or elevated, it affects your vision. And when people aren’t seeing clearly, they typically first go to the eye doctor to seek an eyeglass prescription to correct this,” she says. "Diabetes can be silent. There is usually no pain. The first obvious sign of it is often blurred vision.”

Other warning signs of diabetes include increased thirst, frequent urination, sores that don’t heal (usually on the hands and feet) and unexplained weight loss. For women, signs can include an increase in yeast infections and other gynecological issues not associated with bacteria, she says.

FDA Resources

FDA has created many tools on www.fda.gov to inform the public and to help Latinas and all consumers prevent and treat diabetes.

This article appears on FDA's Consumer Updates page, which features the latest on all FDA-regulated products.

April 22, 2014



Posted by enriqueweb123 date 04/29/14 02:57 PM  Category FDA


Sunday, December 8, 2013

Returning FDA to Regulate Tobacco



Product Requirements, Marketing & Labeling

Learn how to legally market a tobacco product. Find info on promotions and labeling.

Guidance, Regulations & Compliance

Regulatory documents, public comments, warning letters, & Tobacco Control Act

News & Events

Get the latest from FDA’s Center for Tobacco Products (CTP), including press releases, fact sheets, and information on upcoming events, meetings, and conferences.

Youth & Tobacco

Protecting kids, regulations, & illegal sales

Public Health, Science & Research

Public Health Information, Health Fraud, Electronic Cigarettes, Menthol, HPHCs

Resources for You

For Consumers, Retailers, Manufacturers, Researchers, Health Professionals and State, Local, Tribal and Territorial Governments


Posted by E.Pomales P.E date 12/08/13 06:28 PM  Category FDA


Sunday, December 8, 2013

Science and Research (Medical Devices)



A core function of CDRH is to advance regulatory science, the science of developing new tools, standards and approaches to assess the safety, efficacy, quality, and performance of medical devices and radiation-emitting products. Science at CDRH includes laboratory and field research in the areas of physical, life, and engineering sciences as well as epidemiological research in postmarket device safety. Research is conducted in FDA laboratories and through collaborations with academia, healthcare providers, other government agencies and industry. CDRH relies upon this work to support its efforts to ensure public safety in areas as varied as medical imaging, medical device software, breast implants, and drug-eluting stents.

Posted by E.Pomales P.E date 12/08/13 06:23 PM  Category FDA


Wednesday, November 27, 2013

Ready for Obamacare



Get your daily scoop of what State of the Union is watching today, November 27, 2013.

1. Open minds on Obamacare.  Despite the Affordable Care Act's disastrous rollout, there appears to be a silver lining. While the new CNN/ORC poll finds that 58 percent oppose Obamacare compared to 40 percent who support it, 54 percent also believe that problems facing the law will eventually be solved, including seven out of 10 younger Americans. In addition, 53 percent say it's too early to judge if the law is a success or failure. Also, behind the health care law's opposition numbers, 14 percent say it's not liberal enough and doesn't go as far as it should.

2. And yet, another delay. With a November 30th deadline to have the Healthcare.gov website fixed, the White House announced it is postponing another key aspect of Obamacare.  Online enrollment for small businesses to get insurance for their employees through the federal marketplace is being delayed for one year. Administration officials say they had to focus on repairing the website's basic functions and could not address problems relating to the small business exchange. John Arensmeyer, chief executive for the advocacy group Small Business Majority, told the New York Times the delay was "disappointing," and said it was "important to get the small business marketplace up and running as soon as possible."

3. On the road to 2016.  Texas Republican Gov. Rick Perry will make a campaign-style swing through South Carolina next week. He'll give the keynote address at a state Republican Party banquet Tuesday, as well as make several other stops in the state. South Carolina holds the nation's first southern primary for both Republicans and Democrats and Perry is no doubt hoping to firm up early support among the state's influential evangelical voting bloc. Perry, who has expressed an interest in making a second bid for the White House, visited Iowa earlier this month.

4. Dead heat in Buckeye country. Yes it's way early and it's a hypothetical matchup, but we couldn't resist. A new poll finds Hillary Clinton and N.J. Gov. Chris Christie tied in a potential 2016 matchup in the crucial swing state of Ohio. The Quinnipiac University survey finds 42 percent of registered voters support Clinton, while 41 percent back Christie. The Ohio poll is the latest to show Christie gaining ground on the former Secretary of State in a swing state.

For the first time since the late 19th century, Thanksgiving and the first day of Hanukkah will fall on the same day.  The State of the Union staff wishes you a very safe and happy Thanksgiving and Hanukkah, or if you prefer, Happy Thanksgivukkah!

SOTU bonus poll: Do you agree with retailers starting Black Friday shopping on the Thanksgiving holiday?


Posted by E.Pomales P.E date 11/27/13 03:59 PM  Category FDA


Wednesday, November 27, 2013

Obamacare



Obama pardons a turkey too.

Posted by E.Pomales P.E date 11/27/13 03:49 PM  Category FDA


Friday, October 18, 2013

Is General Electric Irrelevant?



I recently noticed a headline asking, "Is GE Becoming Irrelevant?” The gist of the article considered GE from an investment point of view, and maybe in that context it is becoming a less stellar investment. In terms of its industrial focus, and most likely its Industry 4.0 future, however, it is not irrelevant.

The company just unveiled 14 new Industrial Internet Predictivity technologies (no, this is not a typo) to provide access to machines "virtually anywhere.” The focus is on such important industrial concerns as reducing downtime, providing preventive maintenance, reducing emissions and fuel costs, and increasing productivity.

Why the strange spelling? The GE solutions that are streaming out will be powered by Predix, an industrial-strength platform destined to provide both standardization and security to the connection of machines, industrial big data, and the people who make decisions based on them.

Doesn’t this look like another move to Industry 4.0 without the travel? (See: Want a Great Industrial Engineering Job? Move to Germany.)

Partnerships to help GE pull Predictivity off and boost wired and wireless connectivity include AT&T, Cisco, and Intel. By the company’s own admission, it’s the management and analysis of big-data output within a secure environment that is both the biggest challenge and opportunity -- and these three partners should lend credibility and muscle to the effort.

GE seems to be in a great position for success in predictive solutions. Given the company’s sensor technology and industrial focus, it has the experience, products, and now partners in place, not to mention sufficiently deep pockets. According to Jeff Immelt, GE chairman and CEO, "We are developing more predictive solutions and equipping our products with sensors that constantly measure performance so our customers see major productivity gains and minimize unplanned downtime. Observing, predicting and changing this performance is how the Industrial Internet will help airlines, railroads and power plants operate at peak efficiency."

Just unveiled in the asset optimization category are:

  • The Drilling iBox System for oil and gas
  • ReliabilityMax, also for oil and gas
  • Field360 (oil and gas)
  • LifeMax® Advantage (power and water)
  • PowerUp (power and water)
  • Rail Connect 360 Monitoring and Diagnostics (transportation)

In the operations optimization column:

  • Non-destructive Testing Remote Collaboration (oil and gas)
  • Hof SimSuite (healthcare)
  • Cloud Imaging (healthcare)
  • Grid IQ™ Insight (energy management)
  • Proficy MaxxMine (energy management)
  • Flight Efficiency Services (aviation)
  • ShipperConnect (transportation)

The 24 solutions now available provide flexibility for machine management and operations.

GE might be old hat to the investment community; after all, it was added to the Dow Jones Industrial Average in 1907. And while it may have been a stagnant investment to the financial community, it seems a good bet for the industrial segment going forward.


Posted by E.Pomales P.E date 10/18/13 08:10 AM  Category Industrial Engineering


Friday, October 18, 2013

Cell-Free Biomanufacturing for Cheaper, Cleaner Chemicals



Biotech startup Greenlight Biosciences has a cell-free approach to microbial chemical production.

Biotechnologists have genetically engineered bacteria and other microbes to produce biofuels and chemicals from renewable resources. But complex metabolic pathways in these living organisms can be difficult to control, and the desired products can be poisonous to the microbes. What if you could eliminate the living cell altogether?

Greenlight Biosciences, a Boston-area startup, engineers microbes to make various enzymes that can produce chemicals and then breaks open the bugs to harvest those enzymes. The scientists don’t have to go to the trouble of isolating the enzymes from the other cellular material; instead, they add chemicals to inhibit unwanted biochemical reactions. By mixing slurries based on different microbes with sugars and other carbon-based feedstocks, the company can generate complex reactions to produce a variety of chemicals. Greenlight says its technology enables the company to make cheaper versions of existing chemicals and has already produced a food additive, drug products, and pesticides and herbicides.

The biggest motivation in starting the company was to figure out how to produce such compounds in a more environmentally friendly way, says CEO Andrey Zarur. But Greenlight’s products also have to be cheaper than those produced by chemical- or cell-based manufacturing, he says, or industries will be reluctant to use them.

Greenlight’s strategy is a departure from classic fermentation processes that depend on vats of living microbes. It is also unlike a different approach to genetic engineering, often called synthetic biology, that tweaks the pathways in microbes so that they are optimized to fabricate desirable compounds. Several companies are engineering bacteria and yeast to produce specialty chemicals, but for the most part, these groups keep the bugs alive. Amyris, for example, can make biofuels, medicines, and chemicals used in cosmetics and lubricants by engineering microbes with new sets of enzymes that can modify sugars and other starting materials (see "Amyris Announces Commercial Production of Biochemicals” and "Microbes Can Mass-Produce Malaria Drug”). Metabolix has engineered bacteria to produce biodegradable plastic (see "A Bioplastic Goes Commercial”).

A problem with that strategy is that when bacteria and other microbes are turned into living chemical factories, they still have to put some resources into growing instead of chemical production, says Mark Styczynski, a metabolic engineer and systems biologist at the Georgia Institute of Technology. Furthermore, even in a seemingly simple bacterium, metabolism is complicated. "Metabolic pathways have complex regulation within them and across them,” he says. Changing one metabolic pathway to improve chemical production can have broad and sometimes negative consequences for the rest of the cell.

Thus, separating the production pathway from the needs of the cell could be a huge advantage, he says. Greenlight doesn’t completely avoid microbes. In the company’s sunny lab space north of Boston, researchers use bubbling bioreactors to grow bacteria in liquid culture, maintaining different species and strains that can produce a variety of enzymes. Once the bugs have reached a certain density, the researchers send them through a high-pressure extruder to break them into pieces. Then they add drugs to the resulting gray slurry to turn off most of the cells’ metabolic enzymes; the useful enzymes are unaffected because they have been engineered to resist the drugs.

The technology that keeps the exposed metabolic pathways working was developed by James Swartz, a biochemical engineer at Stanford University who left his position as a protein engineer at the biotechnology company Genentech to develop cell-free methods for producing pharmacological proteins (insulin is an example of a druglike protein that can be produced by biotechnology). Seeking more control over the biological machinery that produces proteins, Swartz figured out how to give that machinery the biochemical environment it needed even outside its normal home in a cell. Not only did his methods enable him to make more complex proteins, but it turned out they could also be used to control biological machinery to make small molecules and chemicals. "We’ve found that by reproducing the chemical conditions that occur inside the cell, we activate a lot of metabolic processes, even ones people thought were too complicated,” he says.

Greenlight can troubleshoot and tweak the metabolic production of chemicals in ways more akin to chemical engineering than anything found in typical microbial engineering. The cell-free slurries are active for 96 hours before the enzymes begin to break down. At that point, a new batch of microbes must be grown.

"One of the beauties of the cell is it is self-replicating,” says David Berry, a principal at Flagship Ventures, who cofounded the biofuels companies LS9 and Joule Unlimited. (Berry is a 2007 MIT Technology Review Innovator Under 35.) But even though a cell-free system misses out on that advantage, there are other benefits, such as superior flexibility. "There is the potential to work with more inputs and to work around situations where certain pathways currently don’t work because of the needs of the cell,” Berry says.

Zarur says Greenlight could have its first product on the market at the beginning of next year. It will be a food supplement with health benefits, he says.

The company has also received a $4.5 million grant from ARPA-E to develop a system for converting methane, the main ingredient in natural gas, to liquid fuel. The agency says such technology could "enable mobile fermenters to access remote sources of natural gas for low-cost conversion of natural gas to liquid fuel.”


Posted by E.Pomales P.E date 10/18/13 08:05 AM  Category Chemical Engineering


Friday, October 18, 2013

Researchers Advance Toward Engineering 'Wildly New Genome'



Oct. 17, 2013 — In two parallel projects, researchers have created new genomes inside the bacterium E. coli in ways that test the limits of genetic reprogramming and open new possibilities for increasing flexibility, productivity and safety in biotechnology.
Wyss Institute file photo of E. coli. (Credit: Rick Groleau)

In one project, researchers created a novel genome -- the first-ever entirely genomically recoded organism -- by replacing all 321 instances of a specific "genetic three-letter word," called a codon, throughout the organism's entire genome with a word of supposedly identical meaning. The researchers then reintroduced a reprogrammed version of the original word (with a new meaning, a new amino acid) into the bacteria, expanding the bacterium's vocabulary and allowing it to produce proteins that do not normally occur in nature.

In the second project, the researchers removed every occurrence of 13 different codons across 42 separate E. coli genes, using a different organism for each gene, and replaced them with other codons of the same function. When they were done, 24 percent of the DNA across the 42 targeted genes had been changed, yet the proteins the genes produced remained identical to those produced by the original genes.

"The first project is saying that we can take one codon, completely remove it from the genome, then successfully reassign its function," said Marc Lajoie, a Harvard Medical School graduate student in the lab of George Church. "For the second project we asked, 'OK, we've changed this one codon, how many others can we change?'"

Of the 13 codons chosen for the project, all could be changed.

"That leaves open the possibility that we could potentially replace any or all of those 13 codons throughout the entire genome," Lajoie said.

The results of these two projects appear today in Science. The work was led by Church, Robert Winthrop Professor of Genetics at Harvard Medical School and founding core faculty member at the Wyss Institute for Biologically Inspired Engineering. Farren Isaacs, assistant professor of molecular, cellular, and developmental biology at Yale School of Medicine, is co-senior author on the first study.

Toward safer, more productive, more versatile biotech

Recoded genomes can confer protection against viruses -- which limit productivity in the biotech industry -- and help prevent the spread of potentially dangerous genetically engineered traits to wild organisms.

"In science we talk a lot about the 'what' and the 'how' of things, but in this case, the 'why' is very important," Church said, explaining how this project is part of an ongoing effort to improve the safety, productivity and flexibility of biotechnology.

"These results might also open a whole new chemical toolbox for biotech production," said Isaacs. "For example, adding durable polymers to a therapeutic molecule could allow it to function longer in the human bloodstream."

But to have such an impact, the researchers said, large swaths of the genome need to be changed all at once.

"If we make a few changes that make the microbe a little more resistant to a virus, the virus is going to compensate. It becomes a back and forth battle," Church said. "But if we take the microbe offline and make a whole bunch of changes, when we bring it back and show it to the virus, the virus is going to say 'I give up.' No amount of diversity in any reasonable natural virus population is going to be enough to compensate for this wildly new genome."

In the first study, with just a single codon removed, the genomically recoded organism showed increased resistance to viral infection. The same potential "wildly new genome" would make it impossible for engineered genes to escape into wild populations, Church said, because they would be incompatible with natural genomes. This could be of considerable benefit with strains engineered for drug or pesticide resistance, for example. What's more, incorporating rare, non-standard amino acids could ensure strains only survive in a laboratory environment.

Engineering and evolution

Since a single genetic flaw can spell death for an organism, the challenge of managing a series of hundreds of specific changes was daunting, the researchers said. In both projects, the researchers paid particular attention to developing a methodical approach to planning and implementing changes and troubleshooting the results.

"We wanted to develop the ability to efficiently build the desired genome and to very quickly identify any problems -- from design flaws or from undesired mutations -- and develop workarounds," Lajoie said.

The team relied on a number of technologies developed in the Church lab and the Wyss Institute and with partners in academia and industry, including next-generation sequencing tools, DNA synthesis on a chip, and MAGE and CAGE genome editing tools. But one of the most important tools they used was the power of natural selection, the researchers added.

"When an engineering team designs a new cellphone, it's a huge investment of time and money. They really want that cell phone to work," Church said. "With E. coli we can make a few billion prototypes with many different genomes, and let the best strain win. That's the awesome power of evolution."


Posted by E.Pomales P.E date 10/18/13 08:01 AM  Category General


Wednesday, October 16, 2013

Ford Demos Emergency Autosteering




Fall asleep at the wheel of the right prototype car and it will steer you around obstacles. That's what Ford's demonstration of an obstacle avoidance system at its proving ground near Lommel, Belgium, this week implies. But it won't be ready for a long time. Ford took advantage of the attention its prototype drew to announce its full parking-assistance technology, which is mature enough that it might be in your next car and wins hands-down against the autosteering for clever advertising.

Both obstacle avoidance and the more mundane parking assistance are part of the larger trend toward greater autonomy in road cars, as IEEE Spectrum noted at the Frankfurt Motor Show last month. The technologies exist along a spectrum from the simplicity of 20th-century cruise control to features that take over momentarily from bad drivers to the sort of autonomy that would turn drivers into passengers, able to sleep or read an issue of Spectrum without worrying about traffic.

Driver assistance on the market today tends to focus on avoiding impending collisions by detecting obstacles and alerting the driver or even hitting the brakes. Steering around obstacles, such as the Ford demonstrator did in Lommel, is still a nascent technology. "The big jump is now to take over control of the car in the longitudinal and latitudinal directions," says BMW's head of driver assistance and perception Werner Huber. BMW has already introduced dynamic cruise control, which slows the car down as you start to overtake slowpokes. And it's introduced warnings that keep you from drifting out of your lane. Taking over the steering wheel to avoid a crash should be in the next batch of driver assistance packages, Huber says.

Car sensors can now look up to 200 meters ahead and distinguish between routine traffic and obstacles, so writing algorithms clever enough to keep up with them is a major challenge, says Bosch autonomous car chief Michael Fausten. In response, carmakers are joining forces: the demonstration at Ford's test track is actually technology Ford is developing with support from the European Union  and in conjunction with other carmakers, including BMW, Fiat, Daimler, Volvo, and Volkswagen, reports the BBC.

The consortium has tested its obstacle avoidance prototype at up to 60 km/h, which is twice the speed of the automatic braking demonstration I experienced at the Frankfurt Motor Show last month. Still, it's a long way from highway speeds. So for now drivers who are ready to let go of the wheel should probably stick to parking speeds, or at least parking lots.


Posted by E.Pomales P.E date 10/16/13 07:52 AM  Category Energy & Petroleum Engineering


Wednesday, October 16, 2013

New Optics Can Capture Wide Fields in Exquisite Detail



Photo: UCSD Jacobs School of Engineering

David and Goliath: Fiber-coupled monocentric lens camera (left) next to the much larger Canon EOS 5D Mark III DSLR, used for conventional wide-angle imaging.

Compared to traditional camera lenses, the small, monocentric lens in the photo above doesn't look very impressive. The lens itself is a glass sphere the size of a jumbo school-yard marble. It’s held in a tear-drop shaped collet perched at the end of a machined rod. It looks very much like a crustacean’s eyestalk, and most photographers would instantly dismiss claims that this 20-mm-diameter, 16-gram crystal ball could outperform a highly engineered, 370g stack of compound lenses in the conventional Canon 12-mm fisheye lens next to it.

But the proof is in the pictures [below]. The lens has a 12-mm focal length, a wide, f/1.7 aperture, and can clearly image objects anywhere from half a meter to 500 meters away with 0.2 milliradian resolution (equivalent, an Optical Society of America release points out, to 20/10 human vision). And it can cover a 120° field of view with negligible chromatic aberration. Joseph E. Ford (leader of the University of California, San Diego’s Photonic Systems Integration Laboratory) debuted the new lens this week at the Frontiers in Optics Conference in Orlando, Florida. 
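
As a rough sanity check on those specifications (my own back-of-envelope arithmetic, not figures from the release), the f-number and focal length imply an aperture of about 7 mm, and at a 12 mm focal length a 0.2 milliradian angular resolution corresponds to a spot of roughly 2.4 micrometers on the focal surface, which is the scale the fiber bundle described below has to resolve.

    # Back-of-envelope check of the stated specs (small-angle approximation).
    f_mm = 12.0            # focal length
    f_number = 1.7         # aperture ratio (f/1.7)
    theta_rad = 0.2e-3     # claimed angular resolution, 0.2 milliradian

    aperture_mm = f_mm / f_number          # entrance-pupil diameter
    spot_um = f_mm * theta_rad * 1000.0    # resolved spot size on the focal surface

    print(f"aperture ~{aperture_mm:.1f} mm, resolved spot ~{spot_um:.1f} um")
    # -> aperture ~7.1 mm, resolved spot ~2.4 um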

Monocentric lenses—simple spheres constructed of concentric hemispherical shells of different glasses—are not new. Researchers have pursued them for decades, only to be stymied by a series of technical obstacles. Image capture has been a big stumbling-block: A monocentric lens has a focal hemisphere, not a focal plane. Back in the 1960s, researchers started trying to use bundles of optical fibers to do the job, but the available fibers were not up to the task: if they were packed closely enough to capture the high-resolution images, light would bleed through the cladding, producing crosstalk between fibers and degrading the images.

Other alternatives, such as relay optics (essentially using an array of as many as 221 tiny "eyepiece” lenses to image small, overlapping parts of the focal hemisphere and project them onto planes that could be fitted together and re-assembled digitally) were complex and expensive.

Only recently have high-index optical fibers appeared with enough contrast between the fiber core and cladding to overcome cross-talk in very tight bundles. Ford, along with colleagues at UCSD and Distant Focus Corporation in Champaign, Ill., polished these tight bundles into concave hemispheres that matched the monocentric lens’s curvature—creating, in essence, a glass retina for this glass eye (image above). These bundles carry the hemispherical image onto a series of flat focal planes to form 12 to 20 non-overlapping images. These images are fitted together, and some image processing maps the curved image onto a flat display—much as the surface of the globe is mapped onto a cylinder in a Mercator projection.
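
The "globe onto a cylinder" remark can be made concrete with a small sketch. This is a generic Mercator-style mapping under my own simplifying assumptions, not the group's actual calibration and stitching pipeline.

    import math

    # Generic sketch: map a viewing direction on the focal hemisphere
    # (azimuth = left/right angle, elevation = up/down angle) to flat,
    # Mercator-style display coordinates. Not the authors' pipeline.

    def mercator(azimuth_rad, elevation_rad):
        """Valid for |elevation_rad| < pi/2; the vertical scale stretches
        toward the poles, just as Greenland does on a Mercator map."""
        x = azimuth_rad
        y = math.log(math.tan(math.pi / 4.0 + elevation_rad / 2.0))
        return x, y

    print(mercator(math.radians(30), math.radians(10)))  # a pixel 30 deg right, 10 deg up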

The project is funded by the Defense Advanced Research Projects Agency’s SCENICC (Soldier Centric Imaging via Computational Cameras) program. The utility of ultralight optics that combine wide fields with zoom-lens detail is obvious, whether the application is cell-phone cameras or battlefield goggles or (as we will see in a moment) environmental research.

Imaging the Ecosystem

Ecosystems operate on a tremendous range of scales, from the individual microbe to the continent, and they vary over space and time.

A research team from the U.S. Department of Agriculture’s Agricultural Research Service (ARS), Sweet Briar College, and the Carnegie Mellon University Robotics Institute has developed a method for seeing both the forest and the trees…along with the calendar.

And although monocentric lenses might eventually add to the technique, ARS’s Mary H. Nichols and her co-workers put their system together using off-the-shelf parts—principally a Canon G10 camera and robotic panoramic camera mount and photo-integration software from GigaPan.

With time-lapse programming and a solar power source, Nichols’s team set up the camera at Walnut Gulch, Arizona, an experimental station that has been producing precipitation, runoff, and weather data for more than 50 years. To this data archive, the panoramic time-lapse camera added a full image of the hillside every two hours. Each image is the product of 28 separate exposures, with enough detail that viewers can see how the whole watershed "greens-up” after a rain and yet track the development of a single cholla cactus growing way off to the left and up the slope.  (For fun, here are more zoomable time-lapse panoramas.)

Images: Joseph E. Ford/UCSD Jacobs School of Engineering; Mary H. Nichols/USDA-ARS





Wednesday, October 16, 2013

“BigBrain” Project Makes Terabyte Map of a Human Brain



For the first time ever a complete 3-D digital map of a post mortem human brain will be available online for neuroscientists and anyone who wants a better idea of what their grey matter really looks like. The new ultra-detailed model, consisting of a terabyte of data, is part of the European Human Brain Project, created in a joint effort by Canadian and German neuroscientists. With a resolution of 20 micrometers it’s the only model yet to go beyond the macroscopic level. At this degree of resolution cells 20 micrometers in diameter are visible. Although individual smaller cells can’t be seen, it’s possible to identify and analyze the distribution of cells into cortical areas and sub-layers. Previous brain mapping efforts had resolutions one-fiftieth as fine. 

"The whole point of such a modeling project is that you can then start to simulate what the brain does in normal development in children or in degeneration,” says Dr. Alan Evans, a professor of biomedical engineering at the McGill University, in Montreal. "If you wanted to look to Alzheimer’s Disease, you can examine how that brain might perform computationally in a computational model if you remove certain key structures or key connections.” 

Collecting images for the project involved slicing up the brain of a once healthy 65-year-old woman into over 7000 segments, each thinner than a human hair, and then digitizing the findings. This was an especially challenging task, because, once digitized, ruptures created in the slicing process had to be detected and then corrected to develop the final model, a task done both by a large amount of computer analysis and by manually shifting pieces of data to their proper locations. 

The BigBrain is just one of many large-scale brain mapping projects, including President Obama’s recently proposed BRAIN Initiative, Paul Allen’s Brain Atlas, and the Human Connectome Project. The BigBrain is the only one to provide a complete map of an individual brain. The Human Connectome Project and BRAIN Initiative focus more on brain activity. The latter will map the connections of small groups of neurons. The former compiles thousands of MRI images from 68 volunteers to map activity, look at how individual brains vary, and see which parts of the brain are involved in specific tasks. Paul Allen’s Brain Atlas focuses more on gene expression in the brain.

Obviously, an in-depth model of a single post mortem brain can’t really say much about brain activity, nor can it account for slight variances in the structures of individual brains, says Dr. Katrin Amunts, a professor of structural functional brain mapping at Aachen University. Think of it as a general model within which data collected from in vivo brains can be put into context. 

The project is "a common basis for scientific discussions because everybody can work with this brain model and we speak about the same basic findings and we can develop new methodical aspects based on these common model of the human brain,” says Dr. Karl Zilles, a senior professor at the Jülich-Aachen Research Alliance.

BigBrain pushes the limits of today’s technology, as software doesn’t yet exist to place data from multiple brains into a single model at 20-micrometer resolution. A 1-micrometer model could take up 20 to 22 petabytes of data, an amount that no computer today would be able to process, according to Amunts. 
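
A quick scaling estimate shows why: going from 20 micrometers to 1 micrometer is a factor of 20 in each of three dimensions, so the voxel count grows by 20 cubed, or 8 000 times. The arithmetic below is my own, assumes fixed bytes per voxel, and lands a bit below the quoted 20 to 22 petabytes (which presumably includes extra channels or overhead), but in the same order of magnitude.

    # Back-of-envelope scaling, assuming isotropic voxels and fixed bytes per voxel.
    tb_at_20um = 1.0              # released 20-micrometer BigBrain model, ~1 terabyte
    scale = 20 ** 3               # 20x finer along each of three axes
    pb_at_1um = tb_at_20um * scale / 1000.0
    print(f"~{pb_at_1um:.0f} PB at 1 um")   # ~8 PB, same order as the quoted 20-22 PB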

To view the BigBrain follow this link.

Photo: Amunts, Zilles, Evan et al.


Posted by E.Pomales P.E date 10/16/13 07:46 AM  Category Biomedical


Wednesday, October 16, 2013

A Global Alliance for Genomic Data Sharing




In June, a group of 70 hospitals, research institutes, and technology companies from 40 countries formed the Global Alliance (pdf), a consortium to promote open standards and best practices for organizations producing, using, or sharing genomic and clinical data.

Created in response to the flood of genomic data generated by increasingly affordable gene sequencing technologies, the Global Alliance aims to foster an environment of widespread data sharing that is unencumbered by competing, proprietary standards, the likes of which have plagued electronic health records in the United States and elsewhere. For example, although analysis of individuals’ genomes already sees widespread application in the treatment of cancer, inherited disease, and infectious disease, it’s not always possible for researchers to achieve the sample sizes necessary to study rare conditions. This is due in part to the fact that hospitals cannot aggregate data stored in different hospital systems using unstandardized analytical tools and methods. By creating a standardized framework for sharing and using genomic data, the Global Alliance will enhance the opportunities for broader study of a range of diseases while also improving information sharing globally.

The group is modeled after the World Wide Web Consortium (W3C), a nonprofit community that serves as the de facto standards-setting organization for Web technologies. Like the W3C, the Global Alliance plans to secure funding through philanthropic support, grants from research agencies, and member dues.

The Global Alliance has seven core principles:

  • Respect: The Global Alliance will respect the right of individuals to release some or none of their genomic data.
  • Transparency: The Global Alliance will employ transparent management and operating practices.
  • Accountability: The Global Alliance will develop and disseminate best practices for the technology, ethics, and public outreach behind genomic and clinical data sharing.
  • Inclusivity: The Global Alliance will foster partnerships among genomic data stakeholders.
  • Collaboration: Global Alliance members will share data to advance human health.
  • Innovation: The Global Alliance will promote technological advances to accelerate scientific and clinical progress.
  • Agility: The Global Alliance will act quickly to keep pace with rapidly changing technology.

The core principle of innovation is stressed repeatedly in the refreshingly specific section on technological considerations. The Global Alliance strongly advocates a cloud-based data archiving platform in order to minimize data storage costs across member organizations. It also urges gene sequencing technology makers to ensure that their products are compatible with Hadoop and Spark (a Hadoop alternative optimized for real-time and memory-intensive applications) to enable efficient, massively-parallel computation. In addition, the group calls for the creation of an application programming interface (API) that will allow developers to query the system and develop their own applications employing the data.
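
To make the Hadoop/Spark recommendation concrete, here is a minimal sketch of the kind of massively parallel query the alliance envisions. The file path, record layout, and variant identifier are all hypothetical; the alliance's actual API had not been defined at the time of writing.

    from pyspark import SparkContext

    # Hypothetical example: count carriers of one variant across a shared,
    # de-identified call set stored as tab-separated records of
    # (sample_id, variant_id, genotype). The layout and path are made up.
    sc = SparkContext(appName="variant-carrier-count")

    records = sc.textFile("hdfs:///shared/genomes/calls.tsv")
    carriers = (records
                .map(lambda line: line.split("\t"))
                .filter(lambda f: f[1] == "rs0000000" and f[2] != "0/0"))

    print("carriers:", carriers.count())
    sc.stop()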

One major challenge for the alliance will be the broadly differing international attitudes on sharing personal data. According to a 2010 survey from the European Commission (pdf), public opinion regarding the sharing of medical data for research varied widely at the national level. In Sweden and Norway, for example, 82 percent of poll respondents said they would be willing to provide such data; in relatively nearby Latvia and Lithuania, only 31 percent and 41 percent, respectively, would be willing to share their data.

In addition, it is not clear at this stage how the Global Alliance will work with existing genomic data standardization efforts such as those undertaken by the international health standards organization Health Level Seven. The issues of competing standards and global attitudes toward data sharing will need to be worked out to ensure the success of the Alliance.  

The alliance, though still in the early stages, is nevertheless an ambitious project, and one that could be widely influential if its international members can resolve these sensitive data ethics and standards regulation issues.

Travis Korte is a Research Analyst with the Information Technology and Innovation Foundation.

Photo: Alan John Lander Phillips/Getty Images


Posted by E.Pomales P.E date 10/16/13 07:45 AM  Category Biomedical


Wednesday, October 16, 2013

Bionic Skin for a Cyborg You




One decade ago, my research group at the University of Tokyo created a flexible electronic mesh and wrapped it around the mechanical bones of a robotic hand. We had dreamed of making an electronic skin, embedded with temperature and pressure sensors, that could be worn by a robot. If a robotic health aide shook hands with a human patient, we thought, this sensor-clad e-skin would be able to measure some of the person’s vital signs at the same time.

Today we’re still working intensively on e-skin, but our focus is now on applying it directly to the human body. Such a bionic skin could be used to monitor medical conditions or to provide more sensitive and lifelike prosthetics.

But whether we’re building e-skin for robots or people, the underlying technological challenges are the same. Today’s rigid electronics aren’t a good fit with soft human bodies. Creating an electronic skin that can curve around an elbow or a knee requires a thin material that can flex and even stretch without destroying its conductive properties. We need to be able to create large sheets of this stuff and embed it with enough sensors to mimic, at least roughly, the sensitivity of human skin, and we need to do it economically. That’s a tall order, and we’re not there yet. But ultimately, I think engineers will succeed in making e-skins that give people some amazing new abilities.

The first step in making e-skins that can bend around a joint is figuring out how to provide electronics with better mechanical flexibility. Modern integrated circuits, including the microprocessors inside computers and the thin-film transistors behind display screens, are manufactured on rigid substrates like silicon and glass. So the things built with these chips—laptops, flat-panel TVs, and the like—are rigid too.

Manufacturers have already commercialized flexible circuit boards for those passive components that are mechanically flexible, such as wiring. But rigid elements like silicon chips and chip capacitors are still attached to these flexible boards. To make an e-skin, we need greater flexibility: Not only the wiring but also the substrate and all the circuitry must be bendable. We need electronics that can be rolled up, folded, crumpled, and stretched.

Thin-film transistors will be one of the key elements in this electronics revolution. These TFTs can be made of various kinds of semiconductor materials that can be deposited in thin layers, such as amorphous silicon, low-temperature polycrystalline silicon, organic semiconductors, and carbon nanotubes. And there is a range of materials that can serve as flexible substrates for TFTs, such as ultrathin glass, stainless steel foils, and plastic films.

After much experimentation, my group has concluded that plastic films are very promising. They’re rugged and hold up well against mechanical strain, they cost very little, and they’re compatible with new manufacturing processes that can produce large, flexible sheets of electronic materials—including roll-to-roll manufacturing methods now being developed. To print TFTs on a plastic film, you need to keep the processing temperature low enough to prevent the plastic from changing its shape. TFTs made with organic semiconductors seem promising in that regard, because they can be printed at room temperature.

Thin-film transistors don’t just allow electronics to be flexible—they can also help an e-skin mimic the sensitivity of real skin. Consider this: There are more than 2 million pain receptors in a person’s skin, which is equivalent to the number of pixels found in a typical high-definition TV. A major obstacle we faced in developing an e-skin was figuring out how many sensors could be integrated into electronic sheets. You can’t wire 2 million sensors directly to the driver circuits that control them, because this would mean cramming 2 million contact pads onto a silicon chip.

Our solution was to do exactly what display manufacturers do to control the transistors in their TV screens. They use wiring layouts that allow the CPU to send commands to the transistors attached to individual pixels based on where they lie in a big conductive grid. Using column and row numbers to specify the pixel’s address reduces the number of connections necessary. A similar "active matrix” strategy can be used in e-skins with millions of embedded sensors.
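
A minimal sketch of that idea, under my own simplifying assumptions rather than the authors' actual drive electronics: with row-and-column addressing, a grid of n by m sensors needs only n + m lines instead of n times m dedicated contacts, and the array is read out one row at a time.

    # Sketch of active-matrix readout (not the authors' circuitry): select one
    # row line at a time and sample every column line through the thin-film
    # transistor at that crossing, so an n x m array needs n + m wires, not n*m.

    def scan_active_matrix(read_sensor, n_rows, n_cols):
        """`read_sensor(r, c)` stands in for driving row r and sampling column c."""
        frame = []
        for r in range(n_rows):                       # activate one row line
            frame.append([read_sensor(r, c)           # sample each column line
                          for c in range(n_cols)])
        return frame

    n = m = 1414                                      # ~2 million sensing points
    print("dedicated contacts:", n * m)               # ~2,000,000
    print("active-matrix lines:", n + m)              # 2,828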


Posted by E.Pomales P.E date 10/16/13 07:40 AM  Category Biomedical


Wednesday, October 16, 2013

Teeny Tiny Pacemaker Fits Inside the Heart




A tiny pacemaker that doesn't need wires to stimulate the heart has been approved for sale in the European Union. It's the world's first wireless pacemaker to hit the market. This device, which is about the size and shape of a AAA battery, is designed to be inserted into the heart in a minimally invasive procedure that would take about a half-hour. 

The device was developed by a secretive California startup called Nanostim, which was just acquired by the biomedical device company St. Jude Medical. The company will have to do more clinical trials before the device can be submitted for approval to the U.S. Food and Drug Administration. 

Today's pacemakers are already pretty small—about the size of three poker chips stacked up—but to insert one a surgeon has to cut open a patient to install the device near the heart, and then connect the wires, called leads, to provide electrical stimulation to the heart muscle. Those leads are often the source of the problem when pacemakers fail. The tiny wires can fracture or move as the heart beats continuously, and St. Jude has had several pacemakers recalled as a result of faulty leads.

The Nanostim device is put in place via a steerable catheter that's inserted into the femoral artery. The tiny pacemaker is attached to the inside of a heart chamber, where it can directly stimulate the muscle. The animation below (no audio) demonstrates the insertion procedure.

 

St. Jude says the pacemaker's battery should last for 9 to 13 years, and says that the pacemaker can be removed and replaced in a similar procedure to the insertion. 

The market for such a device is large: More than 4 million people worldwide now have a pacemaker or a similar device to manage their cardiac rhythms, and 700 000 new patients receive such devices each year. 

Image and animation: St. Jude Medical



Posted by E.Pomales P.E date 10/16/13 07:35 AM  Category Biomedical


Friday, October 11, 2013

Silicon and Graphene: Two Great Materials That Stay Great Together




The use of graphene as a transparent conducting film has been hotly pursued of late, in large part because it offers a potentially cheaper alternative to indium tin oxide (ITO), where a bottleneck of supply seems to be looming.

It has not been clear whether photovoltaic manufacturers have taken any interest in graphene as an alternative for transparent conducting films. This lack of interest may in part be the result of there being little research into whether graphene maintains its attractive characteristic of high carrier mobility when used in conjunction with silicon.

Now researchers at the Helmholtz Zentrum Berlin (HZB) Institute in Germany have shown that graphene does not lose its impressive conductivity characteristics even when mated with silicon.

"We examined how graphene's conductive properties change if it is incorporated into a stack of layers similar to a silicon based thin film solar cell and were surprised to find that these properties actually change very little," said Marc Gluba of the HZB Institute for Silicon Photovoltaics in a press release.

The research, which was published in the journal Applied Physics Letters ("Embedded graphene for large-area silicon-based devices”), used the method of growing the graphene by chemical vapor deposition on a copper sheet and then transferring it to a glass substrate. This was then covered with a thin film of silicon.

The researchers experimented with two different forms of silicon commonly used in thin-film technologies: amorphous silicon and polycrystalline silicon. In both cases, despite completely different morphology of the silicon, the graphene was still detectable.

"That's something we didn't expect to find, but our results demonstrate that graphene remains graphene even if it is coated with silicon," said Norbert Nickel, another researcher on the project, in a press release.

In their measurements, the researchers determined that the carrier mobility of the graphene layer was roughly 30 times greater than that of conventional zinc oxide-based contact layers.

Although the researchers concede that connecting the graphene-based contact layer to external contacts is difficult, it has garnered the interest of their thin-film technology colleagues. "Our thin film technology colleagues are already pricking up their ears and wanting to incorporate it,” Nickel adds.


Posted by E.Pomales P.E date 10/11/13 07:10 AM  Category Chemical Engineering


Friday, October 11, 2013

Agricultural Phosphorus Recovery



Oct. 9, 2013 — Phosphorus is an essential nutrient in agriculture. In response to the increasing demand for phosphorus in the food, biofuels and biobased materials industries, global consumption of phosphate has risen significantly and will continue to increase. In 2008, approximately 1.4 million tonnes of phosphorus were consumed for the production of synthetic phosphate fertilizer. Moreover, phosphate rock reserves are non-renewable and controlled by only a few countries such as China, Morocco, Tunisia and the U.S.A. As a result, Europe is completely dependent on imports from these countries to cover its phosphorus demand.

Besides non-renewable reserves, alternative phosphate resources include municipal wastewater and agricultural organic residues such as livestock manure or digestate from biogas plants. Although new technologies have already been developed for the recovery of dissolved inorganic phosphates in the liquid fractions of municipal and agricultural wastes, solid residues remain a largely untapped source for phosphorus in its organic form. In solid fractions, organic phosphorus bound in biochemical molecules such as phospholipids, nucleotides and nucleic acids offer a bountiful source of phosphorus.

These agricultural residues represent a huge additional reservoir for phosphate recovery: Annually, more than 1,800 million tonnes of manure are generated in the EU and the amount of digestion residues is still increasing. Especially in swine and poultry manure, up to 50 per cent of the overall phosphorus is present in the organic form. In the PhosFarm project, this organic residual matter is to be made accessible as a valuable phosphate resource. The project consortium coordinated by the Fraunhofer Institute for Interfacial Engineering and Biotechnology IGB wants to develop a process and realise a pilot plant that features a controlled enzymatic release of organically bound phosphate, enabling up to 90 per cent recovery of total phosphorus.


This novel strategy is to be carried out using phosphate hydrolysing enzymes immobilised onto suited carriers. "In preliminary experiments, we could show that these enzymes can release inorganic phosphate from model compounds," explains Jennifer Bilbao, who manages the project at the Fraunhofer IGB. After separation of the solid fraction, the released phosphate dissolved in the liquid fraction can be precipitated as magnesium ammonium phosphate and calcium phosphate, which in turn are directly usable as high value fertilising salts.
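
The article does not spell out the chemistry, but precipitating dissolved phosphate as magnesium ammonium phosphate (the mineral struvite) is conventionally written as:

    Mg²⁺ + NH₄⁺ + PO₄³⁻ + 6 H₂O → MgNH₄PO₄·6H₂O (struvite)

In struvite processes generally, the magnesium is often dosed as a salt, while ammonium and phosphate are already abundant in manure-derived liquid fractions; the PhosFarm-specific dosing details are not given here.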


The remaining dewatered solid phase is dried with an energy-efficient drying process operating with superheated steam instead of hot air. The resulting organic soil-amendment substrate helps to improve soil fertility. Moreover, according to the requirements of crop species and depending on the soil conditions, the organic soil improvers can be mixed with the recovered mineral fertiliser salts to obtain a suitable nutrient composition with a defined N/P ratio.


Bilbao describes the advantages of the envisioned concept: "With our mineral fertiliser salt and organic soil improver products, synthetic phosphate fertilisers are saved and overfertilisation from the application of livestock manure on agricultural fields is prevented. This realisation of efficient phosphorus recovery not only generates valuable products from an otherwise wasted residue, but at the same time achieves environmentally friendly closed-loop recycling."



Posted by E.Pomales P.E date 10/11/13 07:07 AM 07:07 AM  Category Agricultural Engineers





Friday, October 11, 2013

Former NRC Chairman Says U.S. Nuclear Industry is "Going Away"



Gregory Jaczko, who was chairman of the U.S. Nuclear Regulatory Commission at the time of the Fukushima Daiichi accident, didn't mince words in an interview with IEEE Spectrum. The United States is turning away from nuclear power, he said, and he expects the rest of the world to eventually do the same. 

"I’ve never seen a movie that’s set 200 years in the future and the planet is being powered by fission reactors—that’s nobody’s vision of the future," he said. "This is not a future technology. It’s an old technology, and it serves a useful purpose. But that purpose is running its course."

Jaczko bases his assessment of the U.S. nuclear industry on a simple reading of the calendar. The 104 commercial nuclear reactors in the United States are aging, and he thinks that even those nuclear power stations that have received 20-year license extensions, allowing them to operate until they're 60 years old, may not see out that term. Jaczko said the economics of nuclear reactors are increasingly difficult, as the expense of repairs and upgrades makes nuclear power less competitive than cheap natural gas. He added that Entergy's recent decision to close the Vermont Yankee plant was a case in point.

"The industry is going away," he said bluntly. "Four reactors are being built, but there’s absolutely no money and no desire to finance more plants than that. So in 20 or 30 years we’re going to have very few nuclear power plants in this country—that’s just a fact." 

Jaczko spoke to IEEE Spectrum following his participation in an anti-nuclear event in New York City at which speakers discussed the lessons that could be learned from the Fukushima Daiichi accident. Speakers also included former Japanese prime minister Naoto Kan, who headed the government during the Fukushima accident, and Ralph Nader. Several speakers talked about New York's Indian Point nuclear power station, and Jaczko expressed his personal opinion that the plant should be shut down. 

Jaczko argued that more Fukushima-type accidents are inevitable if the world continues to rely on the current types of nuclear fission reactors, and he believes that society will not accept nuclear power on that condition. "For nuclear power plants to be considered safe, they should not produce accidents like this," he said. "By 'should not' I don’t mean that they have a low probability, but simply that they should not be able to produce accidents like this [at all]. That is what the public has said quite clearly. That is what we need as a new safety standard for nuclear power going forward." He acknowledged that new reactor designs such as small modular nuclear reactors and some Generation IV reactor designs could conceivably meet such a safety standard, but he didn't sound enthusiastic. 


Posted by E.Pomales P.E date 10/11/13 07:06 AM 07:06 AM  Category General





Friday, October 11, 2013

Chinese Internet Rocked by Cyberattack



China’s Internet infrastructure was temporarily rocked by a distributed denial of service attack that began at about 2 a.m. local time on Sunday and lasted for roughly four hours. The incident, which was initially reported by the China Internet Network Information Center (CNNIC), a government-linked agency, is being called the "largest ever” cyberattack targeting websites using the country’s .cn URL extension. Though details about the number of affected users have been hard to come by, CNNIC apologized to users for the outage, saying that "the resolution of some websites was affected, leading visits to become slow or interrupted.” The best explanation offered so far is that the attacks crippled a database that converts a website’s URL into the series of numbers (its IP address) that servers and other computers read. The entire .cn network wasn’t felled because some Internet service providers store their own copies of these databases.
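
The "database that converts a website's URL into the series of numbers" is the Domain Name System (DNS). The short Python sketch below, using only the standard library, illustrates that lookup step and why a resolver holding its own cached copy can keep a site reachable when the authoritative servers are under attack; the domain name and cached address here are hypothetical, not data from the incident.

```python
# Minimal illustration of DNS resolution with a cached fallback.
# The domain and cached address below are hypothetical placeholders.
import socket

local_cache = {"example.cn": "93.184.216.34"}  # a record a resolver might have cached earlier

def resolve(hostname: str) -> str:
    """Return an IP address for hostname, falling back to a cached answer if live DNS fails."""
    try:
        return socket.gethostbyname(hostname)  # normal path: ask the DNS hierarchy
    except socket.gaierror:
        # If the authoritative .cn name servers are unreachable (as during the DDoS),
        # a resolver that kept a cached record can still answer the query.
        if hostname in local_cache:
            return local_cache[hostname]
        raise

print(resolve("example.cn"))
```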

A Wall Street Journal report notes that the attack made a serious dent in Chinese Web traffic. Matthew Prince, CEO of Internet security firm CloudFlare, told the WSJ that his company observed a 32 percent drop in traffic on Chinese domains. But Prince was quick to note that although the attack affected a large swath of the country, the entity behind it was probably not another country. "I don’t know how big the ‘pipes’ of .cn are,” Prince told the Wall Street Journal, "but it is not necessarily correct to infer that the attacker in this case had a significant amount of technical sophistication or resources. It may well have been a single individual.”

That reasoning stands in stark contrast to the standard China-blaming reaction to attacks on U.S. and Western European Internet resources or the theft of information stored on computers in those regions. In the immediate aftermath of the incident, there was an air of schadenfreude among some observers. Bill Brenner of cloud-service provider Akamai told the Wall Street Journal that "the event was particularly ironic considering that China is responsible for the majority of the world’s online ‘attack traffic.’” Brenner pointed to Akamai’s 2013 ‘State of the Internet’ report, which noted that 34 percent of global attacks originated from China, with the U.S. coming third with 8.3 percent.

For its part, the CNNIC, rather than pointing fingers, said it will be working with the Chinese Ministry of Industry and Information Technology to shore up the nation’s Internet "service capabilities.”

Photo: Ng Han Guan/AP Photo


Posted by E.Pomales P.E date 10/11/13 07:04 AM 07:04 AM  Category Communications Engineering





Friday, October 11, 2013

This Week in Cybercrime: Companies to FTC: Your Data Security Reach Exceeds Your Grasp



The U.S. Federal Trade Commission is wrong to claim broad authority to seek sanctions against companies for data breaches when it has no clearly defined data security standards, said panelists at a forum sponsored by Tech Freedom, a Washington, D.C., think tank that regularly rails against government regulation.

The event, held on Thursday, centered on the fact that in the last decade the FTC has settled nearly four dozen cases after filing complaints based on its reasoning that a failure to maintain sufficient data security constitutes an unfair or deceptive trade practice. Two pending court cases, says a Tech Freedom statement, "may finally allow the courts to rule on the legal validity of what the FTC calls its 'common law of settlements.'"

One of the FTC critics speaking at the forum was Mike Daugherty, CEO of Atlanta-based diagnostic lab LabMD. The company is currently in the agency’s crosshairs but is fighting back. According to the FTC, a spreadsheet in LabMD's possession containing Social Security numbers, dates of birth, health insurance provider information, standardized medical treatment codes, and other information for more than 9000 patients somehow ended up on a peer-to-peer file-sharing network in 2008. That incident, along with another LabMD security lapse last year in which 500 customer records were lost to identity thieves, prompted the agency to file a complaint.

Those facts notwithstanding, the company maintains that the complaint wasn't based on established rules. According to a Computerworld article, Daugherty said the use of Section 5 of the FTC Act, which allows the agency to take action to prevent or punish unfair or deceptive business practices, is a huge overreach. "If you want to upset [FTC officials], ask them what the standards are," Daugherty said. He incredulously asked, "You mean you can make them up as you go along?”

The forum participants agreed that the U.S. Congress needs to step in and pass legislation that gives the FTC or some other federal agency a specific mandate for such action and rules to follow. What becomes of that argument may be determined by the outcome of the pending court cases.

Bruce Schneier on Combating the NSA’s War on Data Security

Bruce Schneier, internationally renowned security technologist and author of the influential newsletter "Crypto-Gram" and the blog "Schneier on Security," sits down for a conversation about revelations of the NSA’s efforts to subvert and weaken cryptographic algorithms, security products, and standards. In the podcast, Schneier, author of books including Liars and Outliers: Enabling the Trust Society Needs to Survive, talks about what it will take to defeat the capabilities the NSA has developed. The NSA isn’t doing it through sleuthing or ultra-advanced mathematical techniques. It is mainly setting up agreements with software vendors that deliberately weaken security protocols such as SSL and VPNs in ways known only to the NSA.

Why would a company acquiesce to the government in this way? Schneier says that the NSA can ask nicely (while holding a club in its hand in the form of threats to withhold government contracts). It can also force a firm to play ball by sending it a National Security Letter demanding cooperation as well as the company’s silence about what it is being told to do to its unsuspecting customers. And the agency is not above placing a covert agent inside a company to surreptitiously weaken products. "It validates all the paranoia,” Schneier said. "We now can’t trust anything. It’s possible that they’ve done this to only half the protocols on the Internet. But which half? How do you know? You don’t. If a company says, ‘It’s not us,’ you can’t trust them. The CEO might not know [if its cryptography has been weakened by the NSA].”

Pwn2Own Part II: The Researchers Hack Back

HP TippingPoint, whose ZDI bug bounty program pays researchers to spot vulnerabilities so it can do an even better job of protecting customers against as-yet-unpatched security holes, is yet again putting its money where its mouth is. It announced this week on its company blog that it and its co-sponsors, Google and BlackBerry, are putting up US $300 000 in prize money for a hacking contest challenging researchers to demonstrate successful attacks against mobile services and browsers. The Mobile Pwn2Own contest will take place in Tokyo on 13 and 14 November. The first researcher or team to hack a phone's baseband processor will walk away with $100,000. The contest’s rules require that researchers disclose details of the vulnerabilities they leveraged as well as the exploit techniques used to hack the device, service or operating system.

Big money will still be available for hacking a mobile browser ($40,000, but $50,000 for Chrome on Android running on a Nexus 4 or Samsung Galaxy S4); a mobile operating system ($40,000); a message service such as SMS ($70,000); or a short-distance linking technology, like Bluetooth or NFC ($50,000).

The researchers can choose the wall they’ll attempt to scale; the list of eligible devices to be picked apart includes Apple's iPhone 5 and iPad Mini, Google's Nexus 4 smartphone and Nexus 7 tablet, Nokia's Lumia 1020, and Samsung's Galaxy S4 smartphone.

Contractor Steals Data on 2 Million Vodafone Customers

German police and security experts have informed Vodafone customers that a contractor accessed a database inside the telecom giant’s network and made off with customer names, addresses, birth dates and bank account numbers among other personal data for as many as two million customers. Ouch. Though the authorities have a suspect in custody, that provides no assurances about who has gained access to the data and what plans they have for it.

A Kaspersky Threatpost article notes that "Vodafone delayed disclosing the breach in order to give authorities time to investigate.” Meanwhile, Vodafone released a statement describing the activities it has been engaged in subsequent to the horse leaving the barn. Administrators’ passwords have been changed, digital certificates updated, and the server from which the data was pilfered wiped, the company said.

And In Other Cybercrime News…

Is Cybercrime in Russia Actually Declining?

E-Mail Spam Campaign Spreads Android Malware to Smartphones

Twelve Arrested in Plot to Rob London Bank Remotely Using KVM Device Installed on a Computer at a Local Branch


Posted by E.Pomales P.E date 10/11/13 07:02 AM 07:02 AM  Category Communications Engineering





Friday, October 11, 2013

Real-Time Sensor Double Checks What's in Your IV Drip



Studies show that errors in intravenous drug delivery are common in hospitals and other healthcare facilities. Some errors do not have serious consequences, but others cause major harm, including deaths, and health professionals are always looking for ways to improve dispensing workflows. For example, computerized systems have now advanced IV delivery by administering set volumes of medication to a patient. These systems cannot identify a medication, though, or check its concentration as it is given to a patient. But a new optical device can.

Students at the University of Illinois at Urbana-Champaign (UIUC), led by Brian Cunningham, who runs the Micro and Nanotechnology Laboratory at UIUC, used an extremely responsive nanoscale sensing technique called surface-enhanced Raman spectroscopy (SERS) to identify the drugs in IV solutions in real time, meaning the sensor could potentially be used to check IV medications immediately before they are administered to patients. SERS bounces photons off of molecules and measures the photons' post-collision frequency and wavelength to determine the chemical makeup of the molecules themselves.
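
For readers unfamiliar with the measurement, Raman identification works on the shift between the excitation wavelength and the scattered wavelength, usually expressed in inverse centimetres. The short Python sketch below shows that arithmetic with illustrative wavelengths; these are not values reported for the UIUC device.

```python
# Raman shift: the wavenumber difference between the excitation laser and the
# scattered photon. Each drug produces a characteristic set of such shifts
# (its Raman "fingerprint"), which is what a SERS sensor matches against.
# Wavelengths below are illustrative only.

def raman_shift_cm1(lambda_exc_nm: float, lambda_scat_nm: float) -> float:
    nm_to_cm = 1e-7  # 1 nm = 1e-7 cm
    return 1.0 / (lambda_exc_nm * nm_to_cm) - 1.0 / (lambda_scat_nm * nm_to_cm)

# Example: a 785 nm excitation laser and a photon scattered at 830 nm
print(f"{raman_shift_cm1(785.0, 830.0):.0f} cm^-1")  # ~691 cm^-1
```

A spectrometer records many such shifts at once, and the resulting spectrum is compared against a library of known drug spectra to identify what is flowing through the line.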

The students lined IV tubing with a gold surface outfitted with tiny bumps called nano-domes. Then they shone lasers on the gold tube interior, and as IV fluid flowed through the tubes, drug molecules that touched the domes were identified using SERS. The nanostructure surface developed for this project can be deposited on flexible plastic at a low cost because its manufacturing is automated by a very accurate replica molding process.

The detector can currently identify drugs like morphine, methadone, and phenobarbital, and should be highly expandable to an extensive catalogue because of its sensitivity. Additionally, the system can currently identify combinations of two drugs. The goal, though, is for it to be able to handle 10, because a major area of concern in healthcare settings is mistakenly combining drugs with harmful interactions. 

"Up to 61 percent of all life-threatening errors during hospitalization are associated with IV drug therapy," Cunningham said in a press release, citing arecent report. "So for all the really good things hospitals can do, the data shows that mistakes can occasionally happen."

Image: Hsin-Yu Wu/University of Illinois


Posted by E.Pomales P.E date 10/11/13 07:00 AM 07:00 AM  Category Biotechnology Engineering





Friday, October 11, 2013

Hiding Data in a Heartbeat




Elderly patients or those with chronic diseases are increasingly able to monitor their condition from home or other convenient locations because their vital signs and test results can be sent over the Internet to their physicians. But as this becomes standard practice, patient confidentiality is an increasing concern. At RMIT University, in Melbourne, Australia, scientists are working on a way to hide a patient’s private data in plain sight.

The technique, published last month in IEEE Transactions on Biomedical Engineering, uses steganography, the practice of embedding secret information inside a larger bit of innocuous data without noticeably affecting the size or character of the larger data. (Steganography—such as hiding a message in an image file—was famously used by a Russian spy ring caught in the United States in 2010.)

In the research, the technology conceals identifying patient information so it can only be accessed by healthcare workers who have the correct credentials. Its inventors demonstrated it using electrocardiogram (ECG) signals but the researchers hope that it will be applicable for use with various medical monitoring devices.

"It can hide a picture of a person, it can hide personal details of the person, and it can also contain information about who can look at the ECG,” says Ibrahim Khalil, a computer scientist at RMIT and one of the study’s two authors.

Khalil points out that ECG lent itself to developing the steganographic trick because it produces a lot of data every second. A heart monitor’s readouts consume a lot of computing resources, but that also makes them a good haystack in which to hide a needle of data; even a few seconds’ worth is enough to conceal patient information.

Previous research has focused on using cryptographic algorithms to keep patient information confidential when sending physiological signals. But these strategies have significant computational overhead because both the physiological signal and the identifying information must be encrypted on one end and decrypted on the other. Steganography is less computationally intensive because only the hidden data is encrypted.

Another important aspect of the technology demonstrated by the RMIT researchers is that the steganographic process does not distort the ECG data. This is significant because shielding a patient’s identity should not come at the expense of an accurate diagnosis.

After the private data is encrypted, the ECG signal is broken down into frequency sub-bands, some carrying the meaningful data of the ECG and some carrying noise. A mathematical model identifies the different sub-bands and embeds the encrypted personal data in the noise bands. To embed the data securely, the model calls for two types of encryption. One relies on a key that the sender and the recipient both know. The other is based on a uniquely generated matrix that scrambles a key stored by both the sender’s and recipient’s computers. Once the data has been sent, the recipient’s device must have the shared key, the scrambling matrix, and information about how the data was broken down into sub-bands in order to even prompt healthcare personnel for their credentials.
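
The paper's exact embedding algorithm is not reproduced here, but the Python sketch below mirrors the idea just described under simplifying assumptions: decompose a signal into wavelet sub-bands, hide already-encrypted bits in the finest (noisiest) detail band by nudging coefficients onto a quantisation grid, and reconstruct the signal. It uses NumPy and PyWavelets and omits the scrambling matrix and credential checks of the real system.

```python
# Illustrative sub-band steganography on an ECG-like signal (not the RMIT algorithm).
# Requires: pip install numpy PyWavelets
import numpy as np
import pywt

WAVELET, LEVEL, STEP = "db4", 4, 0.01  # orthogonal wavelet; STEP sets embedding strength

def embed_bits(signal, bits):
    coeffs = pywt.wavedec(signal, WAVELET, mode="periodization", level=LEVEL)
    band = coeffs[-1].copy()                    # finest detail sub-band, mostly noise
    for i, bit in enumerate(bits):
        q = int(np.round(band[i] / STEP))       # quantise the coefficient
        if q % 2 != bit:                        # force the quantiser parity to encode the bit
            q += 1
        band[i] = q * STEP
    coeffs[-1] = band
    return pywt.waverec(coeffs, WAVELET, mode="periodization")

def extract_bits(stego, n_bits):
    coeffs = pywt.wavedec(stego, WAVELET, mode="periodization", level=LEVEL)
    return [int(np.round(coeffs[-1][i] / STEP)) % 2 for i in range(n_bits)]

t = np.linspace(0, 2, 1024)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)  # stand-in for a real ECG
secret = [1, 0, 1, 1, 0, 0, 1, 0]               # in practice: encrypted patient data
stego = embed_bits(ecg_like, secret)
print(extract_bits(stego, len(secret)))         # recovers the embedded bits
print(float(np.max(np.abs(stego - ecg_like))))  # distortion stays on the order of STEP
```

In the scheme described above, the hidden bits would themselves be ciphertext, and the sub-band selection and scrambling matrix would be shared secrets, so recovering the data requires far more than knowing the transform.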

Going forward, Khalil and coauthor Ayman Ibaida, a doctoral candidate in computer science at RMIT, would like to see their technique implemented in industry. They are also looking to bring their mathematical model to other biomedical signals. It seems like the process could be implemented fairly easily, but in practice it may be years before steganographic techniques can be incorporated into medical monitoring systems.

"The challenge was how do you make it really difficult for people to break. We did mathematical analysis to prove that it is secure and it is almost unbreakable,” Khalil says. "The way we achieve it without decrypting or increasing the size of the data, I think that’s a big plus for us.”

To find out how to hide messages in Skype calls, WiFi networks, Google Suggest, and BitTorrent, see "4 New Ways to Smuggle Data Across the Internet” in the November 2013 issue.


Posted by E.Pomales P.E date 10/11/13 06:59 AM 06:59 AM  Category Biotechnology Engineering





Friday, October 11, 2013

Excessive security will kill the Internet of Things



October 11, 2013 // Nick Flaherty


Excessive security will kill the emerging Internet of Things, warns the author of a new report.


"There’s a certain amount of panic around the Internet of Things, that you need maximum security or not do it at all,” said Prof. Jon Howes, Technology Director at Cambridge-based Beecham Research and author of a new report on machine to machine (M2M) security. "The problem with M2M is the business model is always tight so as we put tighter and tighter security in place it will kill the business model and make it uneconomic.”
The key is the end-to-end architecture, he says. This is even more important with the recent launch of low-cost microcontrollers such as Silicon Labs’ 49¢ Zero Gecko, which include an AES encryption engine to support security applications.

In recent surveys, Beecham Research has identified end-to-end solution security as the leading concern for solution providers and for adopters of new M2M solutions. "Having analyzed the reality of security for M2M solutions, we have discovered emerging new principles and ways to make security both sufficient and economically viable. There is a profusion of new elements of security and new business models that will enable new markets," said Howes. "Engaged with correctly, this can at long last change security into a value producing capability instead of a resented cost."

This is not about just big players or vertical integration, he says. "It’s not necessarily big companies but people that know how to implement the end-to-end solution in the right environment,” he said. "Smart meters in the UK need a different architecture from smart meters in Germany, for example. A collection of partnerships is the way it’s going to go, and security risk and threat consultants will have a considerable role to play.” This will generate $700m in security solutions for M2M by 2018, he predicts in "Issues and Business Opportunities in Security for M2M Solutions."

The report identifies the importance of right-sizing security for M2M solutions so as "not to kill the M2M patient" with an over-enthusiastic interpretation of the threats and risks involved. The report examines the widely varying degrees of security required for M2M solutions in different verticals and cautions a careful review of risk tolerance to avoid adopting overly strict security policies at the expense of the economic viability of the total solution.
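
As a concrete illustration of the kind of workload such an on-chip AES engine offloads, here is a minimal Python sketch of authenticated encryption of a small M2M payload with AES-128-GCM, using the third-party cryptography package. It is an assumption-laden example of "right-sized" payload protection, not a recommendation from the report and not Silicon Labs' API.

```python
# Sketch: authenticated encryption of a small sensor payload with AES-128-GCM.
# Device ID, payload and key handling are hypothetical placeholders.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # in practice provisioned once and shared with the backend
device_id = b"meter-0042"                   # hypothetical device identifier

def seal_reading(reading: bytes) -> bytes:
    nonce = os.urandom(12)                  # must be unique per message under the same key
    # The device ID is authenticated but transmitted in the clear (associated data).
    return nonce + AESGCM(key).encrypt(nonce, reading, device_id)

def open_reading(packet: bytes) -> bytes:
    nonce, ciphertext = packet[:12], packet[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, device_id)

packet = seal_reading(b'{"kwh": 12.7}')
print(open_reading(packet))                 # b'{"kwh": 12.7}'
```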


Posted by E.Pomales P.E date 10/11/13 06:57 AM 06:57 AM  Category Computer Engineers





Friday, October 11, 2013

8-bit MCUs feature high-accuracy oscillator circuit to control rotation in single-phase fan motors




8-bit microcontrollers equipped with an oscillator circuit that provides more than twice the accuracy of previous models have been developed by LAPIS Semiconductor.


The ML610Q101 and ML610Q102 utilize motor control technology optimized for controlling rotation in single-phase fan motors. In addition, the industry-leading high-accuracy oscillator circuit reduces rotational variations, while multiple timers minimize noise. A hysteresis differential comparator is also built in, allowing a Hall element to be used and contributing to lower costs. Additional features include a compact 16-pin package (4 mm x 4 mm), ensuring compatibility with small fans, and high noise immunity (the devices cleared the ±30 kV level in IEC 61000-4-2 noise testing), making them ideal for applications with demanding noise requirements, such as industrial equipment, home appliances, single-phase fan motors, LED lights, and battery charge controllers.
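
To illustrate why a hysteresis comparator on the Hall-sensor input matters, the short Python sketch below applies Schmitt-trigger-style thresholds to a noisy sinusoid: the output toggles only when the signal crosses the opposite threshold, so electrical noise near a zero crossing does not cause spurious commutation. The thresholds and signal are illustrative, not LAPIS specifications.

```python
# Hysteresis (Schmitt-trigger) comparison of a noisy Hall-sensor-like signal.
# All values are illustrative; they are not taken from the ML610Q101/Q102 datasheet.
import numpy as np

def hysteresis_compare(samples, high=0.1, low=-0.1):
    """Return a 0/1 commutation signal that toggles only on crossing the opposite threshold."""
    state, out = 0, []
    for v in samples:
        if state == 0 and v > high:
            state = 1
        elif state == 1 and v < low:
            state = 0
        out.append(state)
    return out

t = np.linspace(0, 1, 1000)
hall = np.sin(2 * np.pi * 5 * t) + 0.03 * np.random.randn(t.size)  # 5 Hz signal plus noise
commutation = hysteresis_compare(hall)
print("toggles per second:", int(np.sum(np.abs(np.diff(commutation)))))  # typically 10, not inflated by noise
```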

LAPIS Semiconductor has developed an integrated oscillator circuit and a regulator for the logic power supply that does not require external capacitance, providing a two-fold improvement in oscillator accuracy compared with conventional models – the highest in the industry. This minimizes rotational variations in single-phase fan motor rotation control, while numerous timers provide quiet, high-efficiency performance through soft-switching operation. As a result, the number of parts required can be reduced, since there is no need to add an external oscillator or capacitors for the logic power supply regulator. Plus, the compact package enables smaller mounting boards to be used.

Careful circuit and layout design have enabled high noise immunity, making it possible to clear the ±30 kV level, which exceeds the measurement limit of Class 4 (±15 kV), the highest class defined by the IEC 61000-4-2 standard established by the IEC (International Electrotechnical Commission). This makes the LSIs suitable for use in industrial equipment with stringent noise requirements.

A complete support system is provided, including an ML610Q102 reference board and a software development environment that make it easy to begin evaluation of the ML610Q101 and ML610Q102.


Posted by E.Pomales P.E date 10/11/13 06:53 AM 06:53 AM  Category Computer Engineers





Tuesday, October 8, 2013

Better outcome in case of major bleed for patients taking the anticoagulant Pradaxa compared to warfarin



Anticoagulants are an indispensable treatment to prevent dangerous blood clots that can cause devastating ischaemic strokes in patients with atrial fibrillation or life-threatening pulmonary embolism in patients with venous thromboembolism.(1,2) An increased risk of bleeding is a known possible complication of all anticoagulant therapies.(3) This research shows that, when existing management strategies were applied, a major bleed with Pradaxa® resulted in better outcomes than a major bleed with warfarin, even without the availability of a specific antidote.(4)

A post-hoc analysis of five Phase III trials compared the management and outcomes of a major bleeding event in patients taking Pradaxa® (dabigatran etexilate) with major bleeding events in patients taking warfarin. The results are now published online in Circulation. The analysis showed that the 30-day mortality (death within one month) related to a major bleeding event was significantly lower with Pradaxa® than with warfarin in atrial fibrillation patients requiring long-term treatment in the RE-LY® trial. In addition, Pradaxa® treated patients could leave the Intensive Care Unit faster than warfarin treated patients.

When major bleeding did occur in the trials analysed, the patients were managed using standard strategies and treatment options currently available in the clinical setting, both for Pradaxa® and warfarin.(4) This better outcome in case of a major bleed, even in the absence of a specific antidote, provides crucial support for the positive benefit-risk profile of Pradaxa®.

"We found that atrial fibrillation patients, who had a major bleed during therapy, actually had better outcomes if they took dabigatran than if they took warfarin," said Prof. Sam Schulman, Division of Hematology and Thromboembolism, McMaster University, Hamilton, Canada. "It is reassuring to see that existing standard strategies, such as stopping the drug and replacing blood, work just as well with dabigatran treatment as they do with warfarin treatment – if not even better."

The analysis also found that patients who had a major bleeding event during Pradaxa® treatment were usually at higher baseline risk compared to patients with major bleeding events on warfarin - Pradaxa® patients were older, had worse renal function and more often used concomitant treatment with aspirin or non-steroid anti-inflammatory agents.(4)

The pooled post-hoc analysis featured data from five large long-term Phase III trials including the pivotal RE-LY® trial comparing Pradaxa® with warfarin for stroke prevention in non-valvular atrial fibrillation and trials in acute treatment / secondary prevention of venous thromboembolism (VTE). The trials had durations of six to 36 months and included 27,419 patients. Key results are:(4)

  • Lower 30-day mortality in case of a first major bleed in the combined Pradaxa® treatment group compared to the warfarin group in atrial fibrillation patients (odds ratio 0.56, p=0.009)
  • When outcomes for all indications were combined and adjusted, the data showed a strong trend to lower mortality for Pradaxa® compared to warfarin (odds ratio 0.66, p=0.051)
  • One day shorter stay in the intensive care unit required for Pradaxa® patients (mean 1.6 nights) compared with warfarin patients (mean 2.7 nights; p=0.01)
  • Most major bleeds were managed mainly with supportive care using standard clinical measures. The most common measures used were blood transfusions and plasma transfusions.

"These findings provide important and reassuring insights for both physicians and patients", commented Professor Klaus Dugi, Corporate Senior Vice President Medicine, Boehringer Ingelheim. "They demonstrate that even in the absence of a specific antidote, when existing standard strategies are used, patients can expect a better outcome with Pradaxa® than with warfarin should a major bleed occur."

The favourable benefit-risk profile of Pradaxa® is supported by safety assessments from regulatory authorities including the European Medicines Agency and the U.S. Food and Drug Administration (FDA).(5,6) The most recent FDA update reports the results of a Mini-Sentinel assessment that indicated bleeding rates associated with new use of Pradaxa® are not higher than those associated with new use of warfarin. Specifically, for intracranial haemorrhage and gastrointestinal haemorrhage, the combined incidence rate (per 100,000 days at risk) was 1.8 to 2.6 times higher for new users of warfarin than for new users of Pradaxa®.(6)

Pradaxa® is already widely approved for stroke prevention in atrial fibrillation and for primary prevention of VTE following total hip replacement or total knee replacement surgery.(7) The extensive in-market experience of over 2 million patient-years in all licensed indications puts Pradaxa® first among the novel oral anticoagulants.(8)

About Pradaxa® (dabigatran etexilate)
Pradaxa® is approved in over 100 countries worldwide.(8) It is licensed for the prevention of stroke and systemic embolism in patients with non-valvular atrial fibrillation and for the primary prevention of venous thromboembolism in patients undergoing total hip replacement or total knee replacement surgery.(7)

Pradaxa®, a direct thrombin inhibitor (DTI),(9) was the first of a new generation of direct oral anticoagulants targeting a high unmet medical need in the prevention and treatment of acute and chronic thromboembolic diseases.

Potent antithrombotic effects are achieved with direct thrombin inhibitors by specifically blocking the activity of thrombin (both free and clot-bound), the central enzyme in the process responsible for clot (thrombus) formation. In contrast to vitamin-K antagonists, which variably act via different coagulation factors, dabigatran etexilate provides effective, predictable and consistent anticoagulation with a low potential for drug-drug interactions and no drug-food interactions, without the need for routine coagulation monitoring or dose adjustment.

About Boehringer Ingelheim
The Boehringer Ingelheim group is one of the world's 20 leading pharmaceutical companies. Headquartered in Ingelheim, Germany, it operates globally with 140 affiliates and more than 46,000 employees. Since it was founded in 1885, the family-owned company has been committed to researching, developing, manufacturing and marketing novel medications of high therapeutic value for human and veterinary medicine.

As a central element of its culture, Boehringer Ingelheim pledges to act in a socially responsible manner. Involvement in social projects, caring for employees and their families, and providing equal opportunities for all employees form the foundation of the global operations. Mutual cooperation and respect, as well as environmental protection and sustainability, are intrinsic factors in all of Boehringer Ingelheim's endeavors.

In 2012, Boehringer Ingelheim achieved net sales of about 14.7 billion euro. R&D expenditure in the business area Prescription Medicines corresponds to 22.5% of its net sales.

1. Marini C, et al. Contribution of Atrial Fibrillation to Incidence and Outcome of Ischemic Stroke: Results From a Population-Based Study. Stroke. 2005;36:1115-9.
2. Aguilar MI, Hart R. Oral anticoagulants for preventing stroke in patients with non-valvular atrial fibrillation and no previous history of stroke or transient ischemic attacks. Cochrane Database of Syst Rev. 2005;(3):CD001927.
3. Levine MN, et al. Hemorrhagic complications of anticoagulant treatment. Chest. 2001;119(1,Suppl.):108S–21S.
4. Majeed A, et al. Management and outcomes of major bleeding during treatment with dabigatran or warfarin. Circulation. 2013; published online before print September 30 2013, doi:10.1161/CIRCULATIONAHA.113.00233
5. European Medicines Agency Press release - 25 May 2012: EMA/337406/2012. European Medicines Agency updates patient and prescriber information for Pradaxa. http://www.ema.europa.eu/ema/index.jsp?curl=pages/news_and_events/news/2012/05/news_detail_001518.jsp&mid=WC0b01ac058004d5c1 Last accessed 7 October 2013.
6. FDA Drug Safety Communication: Update on the risk for serious bleeding events with the anticoagulant Pradaxa (dabigatran) - 2 November 2012. http://www.fda.gov/Drugs/drugsafety/ucm326580.htm Last accessed 7 October 2013.
7. Pradaxa® European Summary of Product Characteristics, 2013
8. Boehringer Ingelheim data on file.
9. Di Nisio M, et al. Direct thrombin inhibitors. N Engl J Med. 2005;353:1028-40.


Posted by E.Pomales P.E date 10/08/13 11:50 AM 11:50 AM  Category Pharma Industry





Tuesday, October 8, 2013

Secukinumab (AIN457) showed superiority over Enbrel



Novartis announced results from the head-to-head Phase III FIXTURE study showing secukinumab (AIN457), an interleukin-17A (IL-17A) inhibitor, was significantly superior to Enbrel®* (etanercept) in moderate-to-severe plaque psoriasis[1]. Enbrel is a current standard-of-care anti-TNF medication approved to treat moderate-to-severe plaque psoriasis. These new results were presented at the 22nd Congress of the European Association of Dermatology and Venereology (EADV) in Istanbul, Turkey[1].

The pivotal FIXTURE study met all primary and pre-specified key secondary endpoints (p<0.0001 for placebo comparisons and p=0.0250 for Enbrel comparisons)[1]. Both doses of secukinumab showed superior efficacy to Enbrel throughout the 52-week study, beginning as early as Week 2 and confirmed by Week 12, when the primary endpoints were assessed[1]. Importantly, more secukinumab patients experienced almost clear skin (described as PASI 90) and completely clear skin (PASI 100) compared to Enbrel[1], which are higher standards of skin clearance than the efficacy measures used in most psoriasis clinical studies.

"These exciting data suggest that with secukinumab, we have the potential to help more patients achieve clear skin, which is the ultimate treatment goal," said David Epstein, Head of the Pharmaceuticals Division of Novartis Pharma AG. "The data also show that specifically targeting IL-17A may offer a new and effective treatment approach for people living with moderate-to-severe plaque psoriasis."

FIXTURE compared two doses of secukinumab (300 mg and 150 mg) with Enbrel 50 mg and placebo[1]. The co-primary endpoints were assessed at Week 12 and compared secukinumab efficacy versus placebo according to the Psoriasis Area and Severity Index 75 (PASI 75) and the Investigator's Global Assessment (IGA mod 2011)[1].

The study also showed that 72% of secukinumab 300 mg patients experienced at least a 90% reduction in skin redness, thickness and scaling (PASI 90)[1] by Week 16 of the study. More than half (54%) of secukinumab 300 mg patients achieved PASI 90 as early as Week 12, compared to 21% of Enbrel patients[1]. Secukinumab 300 mg patients were also more likely to experience completely clear skin compared to those taking Enbrel in the study, as measured by PASI 100 at Week 12 (24% versus 4%)[1].

Secukinumab-treated patients also had their symptoms resolved faster than those treated with Enbrel in the study[1]. Clinically relevant differences were observed as early as Week 2, and on average secukinumab 300 mg patients had their symptoms halved by Week 3, compared to Week 8 for Enbrel patients[1].

Secukinumab efficacy was sustained over the full one year duration of the study. In FIXTURE, nearly twice as many patients treated with secukinumab 300 mg had a PASI 90 response at Week 52 compared to Enbrel (65% vs. 33%)[1].

There were no major safety signals identified in FIXTURE or the broader secukinumab Phase III clinical trial program in moderate-to-severe plaque psoriasis. In FIXTURE, the incidence of adverse events (AEs) was similar between both secukinumab treatment arms (300 mg and 150 mg), and was comparable to Enbrel[1]. The most common AEs in any treatment group (including placebo) throughout the 52 week treatment period were nasopharyngitis and headache (occurring in between 12-36 patients per 100 patient years in all groups)[1]. At the same time point, serious AEs (SAEs) were experienced by 6% of secukinumab 300 mg, 5% of secukinumab 150 mg and 6% of Enbrel patients[1]. There were no deaths reported during the study[1].

Secukinumab is the first therapy selectively targeting IL-17A to have Phase III results presented. IL-17A is a central cytokine (messenger protein) involved in the development of psoriasis, and is found in high concentrations in psoriasis skin plaques[2]-[4]. Research shows that IL-17A, in particular, plays a key role in driving the body's autoimmune response in disorders such as moderate-to-severe plaque psoriasis and is a preferred target for investigational therapies[2]-[6].

Nearly 3% of the world's population, or more than 125 million people, are affected by plaque psoriasis[7]. This is a common and debilitating disease - even those with very mild symptoms find their condition affects their everyday lives[8]. Psoriasis is also associated with psychosocial effects and those with more severe disease are at a greater risk of death from comorbid diseases such as heart disease and diabetes[9],[10].

Novartis announced top-line results from the FIXTURE study earlier this year. FIXTURE forms part of the robust secukinumab Phase III clinical trial program in moderate-to-severe plaque psoriasis that involved more than 3,300 patients in over 35 countries worldwide. Regulatory submissions of secukinumab in moderate-to-severe plaque psoriasis remain on track in the EU and US for the second half of 2013.

About FIXTURE and the secukinumab data presented at EADV
FIXTURE (the Full year Investigative eXamination of secukinumab vs. eTanercept Using 2 dosing Regimens to determine Efficacy in psoriasis) was a randomized double-blind, placebo-controlled, multicenter, global pivotal Phase III registration study involving 1,306 patients.

The co-primary endpoints were assessed at Week 12 and compared secukinumab efficacy versus placebo according to PASI 75 and IGA mod 2011[1]. These endpoints were also used to demonstrate superiority of secukinumab vs. etanercept. Secondary measures included PASI 50, 75, 90 and 100 at different time points. In addition, the likelihood of loss of response at Week 52 was calculated by determining the proportion of patients who lost PASI 75 at Week 52, after initially achieving it at Week 12[1].

Data from an additional Phase III study of secukinumab in moderate-to-severe plaque psoriasis was also presented today at EADV[13]. The SCULPTURE (Study Comparing secukinumab Use in Long-term Psoriasis maintenance therapy: fixed regimens vs reTreatment Upon start of RElapse) trial found that patients who initially achieved PASI 75 at Week 12 were more likely to maintain their response if they received secukinumab at fixed monthly intervals, compared to treatment only on 'start of relapse'[13]. Results of two additional studies will also become available tomorrow.

About secukinumab (AIN457)
Secukinumab (AIN457) is a fully human IgG1 monoclonal antibody that selectively binds to and neutralizes IL-17A, a key pro-inflammatory cytokine[2]-[4]. Proof-of-concept and Phase II studies in moderate-to-severe plaque psoriasis and arthritic conditions (psoriatic arthritis, ankylosing spondylitis and rheumatoid arthritis) have suggested that secukinumab may potentially provide a new mechanism of action for the successful treatment of immune-mediated diseases[14]-[18]. Results from two additional Phase III studies in moderate-to-severe plaque psoriasis will be presented in 2014, with results for arthritic conditions expected in 2014 and beyond. Phase II studies are also ongoing in other areas, including multiple sclerosis.

About Novartis in specialty dermatology
Novartis is committed to developing innovative, life-changing specialty dermatology therapies redefining treatment paradigms and transforming patient care in severe skin diseases where there are remaining high unmet medical needs. The Novartis specialty dermatology portfolio includes two unique targeted products in Phase III development, secukinumab for moderate-to-severe plaque psoriasis and omalizumab (Xolair®) for chronic spontaneous urticaria (CSU). There are also more than 10 compounds in early stage development for a wide range of severe skin diseases in the Novartis specialty dermatology portfolio.

About Novartis
Novartis provides innovative healthcare solutions that address the evolving needs of patients and societies. Headquartered in Basel, Switzerland, Novartis offers a diversified portfolio to best meet these needs: innovative medicines, eye care, cost-saving generic pharmaceuticals, preventive vaccines and diagnostic tools, over-the-counter and animal health products. Novartis is the only global company with leading positions in these areas. In 2012, the Group achieved net sales of USD 56.7 billion, while R&D throughout the Group amounted to approximately USD 9.3 billion (USD 9.1 billion excluding impairment and amortization charges). Novartis Group companies employ approximately 131,000 full-time-equivalent associates and operate in more than 140 countries around the world.

*Enbrel® is a registered trademark of Amgen Inc.

1. Langley R, FIXTURE oral presentation at EADV.
2. Gaffen SL. Structure and signaling in the IL-17 receptor family. Nat Rev Immunol. 2009; 9(8):556-67.
3. Ivanov S, Linden A. Interleukin-17 as a drug target in human disease. Trends Pharmacol Sci. 2009; 30(2):95-103.
4. Kopf M, Bachmann MF, Marsland BJ. Averting inflammation by targeting the cytokine environment. Nat Rev Drug Discov. 2010; 9(9):703-718.
5. Onishi RM, Gaffen SL. Interleukin-17 and its target genes: mechanisms of interleukin-17 function in disease. Immunology. 2010; 129(3):311-321.
6. Krueger J, Fretzin S, Suárez-Fariñas M, et al. IL-17A is essential for cell activation and inflammatory gene circuits in subjects with psoriasis. J Allergy Clin Immunol. 2012; 130(1):145-154.
7. International Federation of Psoriasis Associations (IFPA) World Psoriasis Day website. "About Psoriasis." http://www.worldpsoriasisday.com/web/page.aspx?refid=114. Accessed August 2013.
8. Mason AR, Mason J, Cork M et al. Topical treatments for chronic plaque psoriasis. Cochrane Database Syst Rev. 2009;15;(2):CD005028.
9. Abuabara K, Azfar RS, Shin DB, Neimann AL, Troxel AB, Gelfand JM. Cause-specific mortality in patients with severe psoriasis: a population-based cohort study in the U.K. Br J Dermatol. 2010 Sep;163(3):586-592
10. Gelfand JM, Troxel AB, Lewis JD, Kurd SK, Shin DB, Wang X, Margolis DJ, Strom BL. The risk of mortality in patients with psoriasis: results from a population-based study. Arch Dermatol. 2007 Dec;143(12):1493-1499.
11. Papp KA, Tyring S, Lahfa M, et al. A global phase III randomized controlled trial of etanercept in psoriasis: safety, efficacy, and effect of dose reduction. Br J Dermatol 2005;152:1304-1312.
12. Leonardi CL, Powers JL, Matheson RT, et al. Etanercept as monotherapy in patients with psoriasis. NEJM 2003;349:2014-2022.
13. SCULPTURE oral presentation at EADV.
14. Papp KA, Langley RG, Sigurgeirsson B, et al. Efficacy and safety of secukinumab in the treatment of moderate-to-severe plaque psoriasis: a randomized, double-blind, placebo-controlled phase II dose-ranging study. BJD 2013; 168, pp412-421.
15. Rich PA, Sigurgeirsson B, Thaci D, et al. Secukinumab induction and maintenance therapy in moderate-to-severe plaque psoriasis: a randomized, double-blind, placebo-controlled, phase II regimen-finding study. BJD 2013;168: 402-411.
16. Genovese MC, Durez P, Richards HB, et al. Efficacy and safety of secukinumab in patients with rheumatoid arthritis: a phase II, dose-finding, double-blind, randomised, placebo controlled study. Ann Rheum Dis 2013;72:863-869.
17. Baeten D, Sieper J, Emery P, et al. The anti-il17a monoclonal antibody secukinumab (AIN457) showed good safety and efficacy in the treatment of active ankylosing spondylitis. At: EULAR 2011, The Annual European Congress of Rheumatology, 25-28 May 2011, London, UK. Abstract 0174.
18. McInnes IB, Sieper J, Braun J, et al. Efficacy and safety of secukinumab, a fully human anti-interleukin-17A monoclonal antibody, in patients with moderate-to-severe psoriatic arthritis: a 24-week, randomised, double-blind, placebo-controlled, phase II proof-of-concept trial. Ann Rheum Dis 2013 Jan 29; doi:10.1136/annrheumdis-2012-202646.


Posted by E.Pomales P.E date 10/08/13 11:48 AM 11:48 AM  Category Pharma Research & Development





Monday, October 7, 2013

Pradaxa® (dabigatran etexilate) drives Boehringer Ingelheim's innovation promise in cardiovascular diseases



Boehringer Ingelheim announces new milestones for the novel oral anticoagulant Pradaxa® (dabigatran etexilate) with over two million patient-years of experience in all licensed indications globally.(1) In addition, the company confirms research currently underway for Pradaxa® in new cardiovascular patient populations, as well as robust plans to gather real-world evidence in patients with non-valvular atrial fibrillation (NVAF). These plans are the cornerstone of an initiative to expand the scientific knowledge of stroke prevention and interventional cardiology with Pradaxa®, and demonstrate Boehringer Ingelheim's leadership and commitment to innovative solutions for patients and healthcare providers.

Initiating new research will help strengthen understanding of the safety and efficacy profile of Pradaxa®, the longest studied novel oral anticoagulant (NOAC). Since its discovery 20 years ago, Pradaxa® has been evaluated through the extensive RE-VOLUTION® clinical trial programme, which includes 10 clinical trials involving more than 40,000 patients in over 100 countries globally.(2-12)

"We are building upon Pradaxa®'s strong foundation in clinical research and prescribing experience to deepen our understanding of the treatment's benefit/risk profile to address evolving patient needs and benefit the cardiovascular community as a whole," commented Professor Klaus Dugi, Corporate Senior Vice President Medicine, Boehringer Ingelheim. "Now, we are in discussions regarding plans for important new clinical trials where we see unmet patient need. Details of these plans will be announced in the near future."

The efficacy and safety of Pradaxa® to reduce risk of stroke and systemic embolism in patients with NVAF was established in the pivotal RE-LY® trial, one of the largest stroke prevention clinical studies ever conducted with NVAF patients.(9,10) To date, Pradaxa® has also been studied in the following areas:

  • Prevention of venous thromboembolism (VTE) in patients undergoing elective total hip and knee replacement surgery (2-5)
  • Acute treatment of deep vein thrombosis (DVT) or pulmonary embolism (PE) (6,7)
  • Prevention of recurrent DVT or PE (8)

The company recently announced it had submitted applications to U.S. and EU regulatory authorities to review Pradaxa® for use in patients with DVT and PE.

Current Pradaxa® Research Underway
Currently, there are 12 Boehringer Ingelheim-sponsored trials of Pradaxa® in progress. These include studies which investigate Pradaxa® in patients with impaired renal function, as well as paediatric patients, and studies which explore management strategies for gastrointestinal symptoms. An investigational antidote is progressing through phase I clinical research for the reversal of Pradaxa®-induced anticoagulation.(13)

Additionally, the company is collecting important data on the use of Pradaxa® in clinical practice with ongoing long-term study programmes and registries. The GLORIA™-AF Registry Program is set to become one of the largest worldwide registries, with the objective of understanding the long-term use of oral antithrombotic therapies in the prevention of non-valvular AF-related strokes in a real-world setting. Currently, the GLORIA™-AF Registry Program is actively recruiting in 35 countries, with a planned enrolment of up to 56,000 patients worldwide. In addition, Boehringer Ingelheim is working in close collaboration with leading hospitals, insurance, healthcare research and governmental organisations in the USA to further assess the real-world clinical usage of Pradaxa®.

Current Experience with Pradaxa®
Globally, Pradaxa® is currently approved in over 100 countries for the prevention of stroke and systemic embolism in patients with non-valvular atrial fibrillation and the primary prevention of VTE following total hip replacement or total knee replacement surgery.(1,14) The extensive in-market experience of over 2 million patient-years puts Pradaxa® first among the novel oral anticoagulants. Since its approval in NVAF, Pradaxa® is estimated to have prevented up to 93,000 strokes in NVAF patients compared to no treatment.(1) Combined experience from the clinical trials programme and real-world clinical practice establishes Pradaxa® as the longest and most extensively studied novel oral anticoagulant.(1)

About Pradaxa® (dabigatran etexilate)
Prescribing experience with Pradaxa® continues to grow with over two million patient-years of experience in all licensed indications globally.(1)

Pradaxa® is approved in over 100 countries worldwide.(1) It is licensed for the prevention of stroke and systemic embolism in patients with non-valvular atrial fibrillation and for the primary prevention of venous thromboembolism in patients undergoing total hip replacement or total knee replacement surgery.(14)

Pradaxa®, a direct thrombin inhibitor (DTI),(15) was the first of a new generation of direct oral anticoagulants targeting a high unmet medical need in the prevention and treatment of acute and chronic thromboembolic diseases.

Potent antithrombotic effects are achieved with direct thrombin inhibitors by specifically blocking the activity of thrombin (both free and clot-bound), the central enzyme in the process responsible for clot (thrombus) formation. In contrast to vitamin-K antagonists, which variably act via different coagulation factors, dabigatran etexilate provides effective, predictable and consistent anticoagulation with a low potential for drug-drug interactions and no drug-food interactions, without the need for routine coagulation monitoring or dose adjustment.

About Boehringer Ingelheim
The Boehringer Ingelheim group is one of the world's 20 leading pharmaceutical companies. Headquartered in Ingelheim, Germany, it operates globally with 140 affiliates and more than 46,000 employees. Since it was founded in 1885, the family-owned company has been committed to researching, developing, manufacturing and marketing novel medications of high therapeutic value for human and veterinary medicine.

As a central element of its culture, Boehringer Ingelheim pledges to act in a socially responsible manner. Involvement in social projects, caring for employees and their families, and providing equal opportunities for all employees form the foundation of the global operations. Mutual cooperation and respect, as well as environmental protection and sustainability, are intrinsic factors in all of Boehringer Ingelheim's endeavors.

In 2012, Boehringer Ingelheim achieved net sales of about 14.7 billion euro. R&D expenditure in the business area Prescription Medicines corresponds to 22.5% of its net sales.

1. Boehringer Ingelheim data on file
2. Eriksson BI. et al. Dabigatran etexilate versus enoxaparin for prevention of venous thromboembolism after total hip replacement: a randomised, double-blind, non-inferiority trial. Lancet. 2007;370:949–56.
3. Eriksson BI. et al. Oral dabigatran versus enoxaparin for thromboprophylaxis after primary total hip arthroplasty (RE-NOVATE II*). A randomised, doubleblind, non-inferiority trial. Thromb Haemost. 2011;105(4):721-9.
4. Eriksson BI. et al. Oral dabigatran etexilate vs. subcutaneous enoxaparin for the prevention of venous thromboembolism after total knee replacement: the RE-MODEL randomized trial. J Thromb Haemost. 2007;5:2178–85.
5. Ginsberg JS. et al. Oral thrombin inhibitor dabigatran etexilate vs North American enoxaparin regimen for prevention of venous thromboembolism after knee arthroplasty surgery. J Arthroplasty. 2009;24(1)1–9.
6. Schulman S. et al. Dabigatran versus warfarin in the Treatment of Acute Venous Thromboembolism. N Engl J Med. 2009;361:2342–52.
7. Schulman S. et al. A Randomized Trial of Dabigatran Versus Warfarin in the Treatment of Acute Venous Thromboembolism  (RE-COVER II). Oral presentation from Session 332: Antithrombotic Therapy 1. Presented on 12 December at the American Society of Hematology (ASH) Annual Meeting 2011.
8. Schulman S. et al. Extended Use of Dabigatran, Warfarin or Placebo in Venous Thromboembolism. N Engl J Med. 2013;368:709–18.
9. Connolly SJ. et al. Dabigatran versus warfarin in patients with atrial fibrillation. N Engl J Med. 2009;361:1139-51.
10. Connolly SJ. et al. Newly identified events in the RE-LY® trial. N Engl J Med. 2010;363:1875-6.
11. Connolly S. J. et al. The Long Term Multi-Center Extension of Dabigatran Treatment in Patients with Atrial Fibrillation (RELY-ABLE) study. Circulation. 2013;128:237-243.
12. Oldgren J. et al. Dabigatran vs. placebo in patients with acute coronary syndromes on dual antiplatelet therapy: a randomized, double-blind, phase II trial. Eur Heart J. 2011;32:2781-9.
13. van Ryn J. et al. Reversal of dabigatran clotting activity in the rat ex vivo by a specific and selective antibody fragment antidote: are there non-specific effects on warfarin, rivaroxaban and apixaban? Poster P4848 to be presented on 3 September 2013 at ESC Congress 2013 (31 August – 4 September, Amsterdam, The Netherlands)
14. Pradaxa Summary of Product Characteristics, 2013
15. Di Nisio M, et al. Direct thrombin inhibitors. N Engl J Med. 2005; 353:1028-40.


Posted by E.Pomales P.E date 10/07/13 06:35 AM  Category Pharma Research & Development

Monday, October 7, 2013

Benefits of Pradaxa® maintained in difficult-to-treat patients with atrial fibrillation and symptomatic heart failure



Published in the European Journal of Heart Failure,(1) results from a new sub-analysis of the RE-LY®* trial demonstrate important benefits of Pradaxa® (dabigatran etexilate) over warfarin in difficult-to-treat patients with non-valvular atrial fibrillation (AF) and previous symptomatic heart failure (HF). The outcomes in heart failure patients were consistent with the results from the main RE-LY® trial: Pradaxa® 150mg twice daily reduced the risk of stroke, including ischaemic stroke, with similar rates of major bleeding compared to warfarin, while Pradaxa® 110mg twice daily showed similar rates of stroke but significantly reduced major bleeding compared to warfarin. Importantly, both doses of Pradaxa® significantly reduced intracranial as well as total bleeding.(1,2,3)

The new results indicate that Pradaxa® can also be beneficial for patients with multiple co-morbidities and further reinforce the consistent efficacy and safety profile of Pradaxa® shown in the main trial and in previous RE-LY® sub-analyses of several other patient subgroups, including patients with type 2 diabetes.(4-8)

"Atrial fibrillation and heart failure frequently coexist and are known to worsen patient prognoses. Heart failure is a specific risk factor for stroke in atrial fibrillation patients, therefore these patients especially need anticoagulant treatment," said Dr. Jorge Ferreira, Cardiologist, Hospital Santa Cruz, Lisbon, Portugal. "However, anticoagulation with a vitamin-K-antagonist in these patients is often associated with poor INR control. This results in more challenging management and impacts the overall efficacy and safety of VKA treatment."

From a total of 18,113 patients with non-valvular AF participating in the landmark RE-LY® trial, 4,904 patients (27%) had previous symptomatic HF.(2,3) In the sub-analysis, which compared the main outcomes of stroke and systemic embolism as well as major bleeding between patients with and without previous symptomatic heart failure, the results were consistent, with no statistically significant interaction between the two patient groups (p-values for interaction).(1) These results show that Pradaxa® is a valuable treatment alternative for patients who suffer from both atrial fibrillation and heart failure.

"These new results from RE-LY® highlight the value of Pradaxa® also for those atrial fibrillation patients that are traditionally considered difficult to treat and have multiple co-morbidities," commented Professor Klaus Dugi, Corporate Senior Vice President Medicine, Boehringer Ingelheim.

Pradaxa® 150mg bid is the only novel oral anticoagulant whose pivotal study has shown a significant reduction in the incidence of ischaemic strokes in patients with non-valvular AF compared to warfarin (67% median time in therapeutic range(9)), offering a relative risk reduction of 25%.(2,3) Nine out of ten strokes in AF patients are ischaemic strokes,(10) which can result in irreversible neurological injury with profound long-term consequences such as paralysis or the inability to move one's limbs or formulate speech.(11)

Furthermore, in the main RE-LY® trial, Pradaxa® 150mg twice daily provided a 36% reduction in the overall risk of stroke versus warfarin, demonstrating superior protection.(2,3) Pradaxa® 110mg twice daily, indicated for certain patients, was as effective as warfarin for the prevention of stroke and systemic embolism.(2,3) Both doses of Pradaxa® were associated with significantly lower total, intracranial and life-threatening bleeding compared to warfarin.(2,3) Pradaxa® 150mg twice daily showed a similar risk of major bleeds versus warfarin, while Pradaxa® 110mg twice daily demonstrated a significantly lower risk.(2,3) The RE-LY® trial used a PROBE design (prospective, randomized, open-label with blinded endpoint evaluation).

Pradaxa® is already widely approved for the prevention of stroke and systemic embolism in patients with non-valvular atrial fibrillation and for primary prevention of VTE following total hip replacement or total knee replacement surgery.(9) Over 1.6 million patient-years of experience across all licensed indications in over 100 countries support Pradaxa® as the leading novel oral anticoagulant.(12)

About Pradaxa® (dabigatran etexilate)
Pradaxa® is approved in over 100 countries worldwide.(12) It is licensed for the prevention of stroke and systemic embolism in patients with non-valvular atrial fibrillation and for the primary prevention of venous thromboembolism in patients undergoing total hip replacement or total knee replacement surgery.(9)

Pradaxa®, a direct thrombin inhibitor (DTI),(13) was the first of a new generation of direct oral anticoagulants targeting a high unmet medical need in the prevention and treatment of acute and chronic thromboembolic diseases.

Potent antithrombotic effects are achieved with direct thrombin inhibitors by specifically blocking the activity of thrombin (both free and clot-bound), the central enzyme in the process responsible for clot (thrombus) formation. In contrast to vitamin-K antagonists, which variably act via different coagulation factors, dabigatran etexilate provides effective, predictable and consistent anticoagulation with a low potential for drug-drug interactions and no drug-food interactions, without the need for routine coagulation monitoring or dose adjustment.

About RE-LY®
RE-LY® (Randomized Evaluation of Long term anticoagulant therapY) was a global, phase III, PROBE (prospective, randomized, open-label with blinded endpoint evaluation) trial of 18,113 patients enrolled in over 900 centres in 44 countries, designed to compare two fixed doses of the oral direct thrombin inhibitor dabigatran (110mg and 150mg twice daily), each administered in a blinded manner, with open-label warfarin (INR 2.0-3.0, median TTR 67%(9)).(2,3) Patients were followed up in the study for a median of 2 years, with a minimum of 1 year of follow-up.(2)

The primary endpoint of the trial was incidence of stroke (including haemorrhagic) or systemic embolism. Secondary outcome measures included all-cause death, incidence of stroke (including haemorrhagic), systemic embolism, pulmonary embolism, acute myocardial infarction, and vascular death (including death from bleeding).(2,3)

Compared to warfarin, dabigatran etexilate showed in the trial:(2,3)

  • Significant reduction in the risk of stroke and systemic embolism - including ischaemic strokes with dabigatran etexilate 150mg twice daily
  • Similar rates of stroke/systemic embolism with dabigatran etexilate 110mg twice daily (110mg indicated for certain patients)
  • Significantly lower major bleeding events with dabigatran etexilate 110mg twice daily
  • Significantly lower life threatening and intracranial bleeding with both doses
  • Significant reduction in vascular mortality with dabigatran etexilate 150mg twice daily.

About the new RE-LY® sub-analysis
From a total of 18,113 patients with non-valvular AF participating in the landmark RE-LY® trial, 4,904 patients (27%) had previous symptomatic HF.(2,3) Despite the challenges of VKA therapy in these patients, the heart failure patients included in the RE-LY® trial spent 63.8% of their time within the therapeutic range (INR 2.0-3.0).(1) In the sub-analysis, the results were consistent between patients with and without HF, showing that Pradaxa® is a valuable treatment alternative for patients with AF and HF (a rough worked check of these rates is sketched after the list):(1)

  • Annual rates of stroke and systemic embolism were 1.44% per year for Pradaxa® 150mg twice daily and 1.90% per year for Pradaxa® 110mg twice daily versus 1.92% per year for warfarin
  • Annual rates of major bleeding were 3.10% per year for Pradaxa® 150mg twice daily and 3.26% per year for Pradaxa® 110mg twice daily versus 3.90% per year for warfarin
  • Rates of intracranial haemorrhage, one of the most devastating complications of anticoagulation,(14) were 0.26% per year for Pradaxa® 150mg twice daily and 0.22% per year for Pradaxa® 110mg twice daily versus 0.65% per year for warfarin
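
As a rough illustration of how such annual event rates translate into relative risk reductions, the short Python sketch below computes crude rate ratios from the figures quoted above. This is arithmetic on the published rates only; the sub-analysis itself reports model-based hazard ratios, which will differ somewhat.

    # Crude relative risk reduction (RRR) from the annual event rates quoted above.
    # Illustrative arithmetic only; the RE-LY sub-analysis reports model-based hazard ratios.

    rates = {
        "stroke/systemic embolism": {"dabi_150": 1.44, "dabi_110": 1.90, "warfarin": 1.92},
        "major bleeding":           {"dabi_150": 3.10, "dabi_110": 3.26, "warfarin": 3.90},
        "intracranial haemorrhage": {"dabi_150": 0.26, "dabi_110": 0.22, "warfarin": 0.65},
    }

    def rrr(treated, control):
        """Relative risk reduction: (control rate - treated rate) / control rate."""
        return (control - treated) / control

    for outcome, r in rates.items():
        for arm in ("dabi_150", "dabi_110"):
            print(f"{outcome:26s} {arm}: RRR vs warfarin = {rrr(r[arm], r['warfarin']):.0%}")

For the 150mg dose, for example, this crude calculation gives roughly a 25% lower rate of stroke and systemic embolism and a 60% lower rate of intracranial haemorrhage than warfarin in this subgroup.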

Stroke Prevention in Atrial Fibrillation (AF)
AF is the most common sustained heart rhythm condition,(15) with one in four adults over the age of 40(15) developing the condition in their lifetime. People with AF are more likely to experience blood clots, which increases their risk of stroke five-fold.(16) Up to three million people worldwide suffer strokes related to AF each year.(17,18) Strokes due to AF tend to be severe, with an increased likelihood of death (20%) and disability (60%).(19)

Ischaemic strokes are the most common type of AF-related stroke, accounting for 92% of strokes and frequently leading to severe debilitation.(10) Appropriate anticoagulation therapy can help to prevent many AF-related strokes and improve overall patient outcomes.(20) Pradaxa® 150mg twice daily is the only novel oral anticoagulant whose pivotal trial vs. warfarin has shown a statistically significant and clinically relevant reduction of both ischaemic and haemorrhagic strokes.(2,3) Additionally, treatment with Pradaxa® is associated with substantially lower rates of both fatal and non-fatal intracranial haemorrhage, one of the most devastating complications of anticoagulation therapy.(14,21)

Worldwide, AF is an extremely costly public health problem, with treatment costs equating to $6.65 billion in the US and over €6.2 billion across Europe each year.(22,23) Because AF-related strokes tend to be more severe, they also result in higher direct medical costs per patient annually.(24) The total societal burden of AF-related stroke reaches €13.5 billion per year in the European Union alone.(25)

About Boehringer Ingelheim
The Boehringer Ingelheim group is one of the world's 20 leading pharmaceutical companies. Headquartered in Ingelheim, Germany, it operates globally with 140 affiliates and more than 46,000 employees. Since it was founded in 1885, the family-owned company has been committed to researching, developing, manufacturing and marketing novel medications of high therapeutic value for human and veterinary medicine.

As a central element of its culture, Boehringer Ingelheim pledges to act in a socially responsible manner. Involvement in social projects, caring for employees and their families, and providing equal opportunities for all employees form the foundation of its global operations. Mutual cooperation and respect, as well as environmental protection and sustainability, are intrinsic factors in all of Boehringer Ingelheim's endeavors.

In 2012, Boehringer Ingelheim achieved net sales of about 14.7 billion euros. R&D expenditure in the Prescription Medicines business corresponded to 22.5% of net sales.

1. Ferreira J, et al. Dabigatran compared with warfarin in patients with atrial fibrillation and symptomatic heart failure: a subgroup analysis of the RE-LY trial. Eur J of Heart Failure. 2013; doi 10.1093/eurjhf/hft111.
2. Connolly SJ, et al. Dabigatran versus warfarin in patients with atrial fibrillation. N Engl J Med. 2009;361:1139-51.
3. Connolly SJ, et al. Newly identified events in the RE-LY trial. N Engl J Med. 2010;363:1875-6.
4. Diener HC, et al. Dabigatran compared with warfarin in patients with atrial fibrillation and previous transient ischaemic attack or stroke: a subgroup analysis of the RE-LY trial. Lancet Neurology. 2010;9:1157–1163.
5. Wallentin L, et al. Efficacy and safety of dabigatran compared with warfarin at different levels of international normalised ratio control for stroke prevention in atrial fibrillation: an analysis of the RE-LY trial. Lancet. 2010;376:975–983.
6. Eikelboom JW, et al. Risk of bleeding with 2 doses of dabigatran compared with warfarin in older and younger patients with atrial fibrillation: an analysis of the Randomized Evaluation of Long-Term Anticoagulant Therapy (RE-LY) trial. Circulation. 2011;123:2363–2372.
7. Oldgren J, et al. Risks for stroke, bleeding, and death in patients with atrial fibrillation receiving dabigatran or warfarin in relation to the CHADS2 score: a subgroup analysis of the RE-LY trial. Ann Intern Med. 2012;155:660–668.
8. Darius H, et al. Comparison of Dabigatran versus Warfarin in Diabetic Patients with Atrial Fibrillation: Results from the RE-LY Trial. Presented at the American Heart Association's Scientific Sessions 2012, Los Angeles, USA. Abstract No. 15937.
9. Pradaxa European Summary of Product Characteristics, 2013
10. Andersen KK, et al. Hemorrhagic and ischemic strokes compared: stroke severity, mortality, and risk factors. Stroke. 2009;40:2068−72.
11. NHLBI website. "What is Stroke?” Available at: http://www.nhlbi.nih.gov/health/health-topics/topics/stroke. Accessed on: October 10, 2012.
12. Boehringer Ingelheim data on file.
13. Di Nisio M, et al. Direct thrombin inhibitors. N Engl J Med 2005; 353:1028-40.
14. Hart RG, et al. Intracranial hemorrhage in atrial fibrillation patients during anticoagulation with Warfarin or Dabigatran: The RE-LY® Trial. Stroke. 2012;43(6):1511-1517.
15. Lloyd-Jones DM, et al. Lifetime risk for development of atrial fibrillation: the Framingham Heart Study. Circulation. 2004;110:1042-6.
16. Camm JA, et al. 2012 focused update of the ESC guidelines for the management of atrial fibrillation. Eur Heart J. 2012;33:2719-47.
17. Camm JA, et al. Guidelines for the management of atrial fibrillation. Eur Heart J. 2010;31:2369–2429.
18. Atlas of Heart Disease and Stroke, World Health Organization, September 2004. Viewed Nov 2012 at http://www.who.int/cardiovascular_diseases/en/cvd_atlas_15_burden_stroke.pdf
19. Gladstone DJ, et al. Potentially Preventable Strokes in High-Risk Patients With Atrial Fibrillation Who Are Not Adequately Anticoagulated. Stroke. 2009;40:235-240.
20. Aguilar MI, Hart R. Oral anticoagulants for preventing stroke in patients with non-valvular atrial fibrillation and no previous history of stroke or transient ischemic attacks. Cochrane Database of Systematic Reviews. 2005, Issue 3. Art. No.: CD001927.
21. Fang MC, et al. Death and disability from warfarin-associated intracranial and extracranial hemorrhages. Am J Med. 2007; 120:700 –705.
22. Coyne KS, et al. Assessing the direct costs of treating nonvalvular atrial fibrillation in the United States. Value Health 2006; 9:348-56.
23. Ringborg A, et al. Costs of atrial fibrillation in five European countries: results from the Euro Heart Survey on atrial fibrillation. Europace 2008; 10:403-11.
24. Brüggenjürgen B, et al. The impact of atrial fibrillation on the cost of stroke: the Berlin acute stroke study. Value Health 2007;10:137-43.
25. Fuster V, et al. ACC/AHA/ESC 2006 Guidelines for the management of patients with atrial fibrillation – executive summary. Circulation. 2006;114:700-52.

*RE-LY® was a PROBE trial (prospective, randomized, open-label with blinded endpoint evaluation), comparing two fixed doses of the oral direct thrombin inhibitor dabigatran etexilate (110 mg bid and 150 mg bid) each administered in a blinded manner, with open label warfarin.(2,3)



Posted by E.Pomales P.E date 10/07/13 06:33 AM  Category Pharma Research & Development

Monday, October 7, 2013

Bayer's investigational drug Riociguat granted FDA orphan drug designation



Bayer HealthCare today announced that the U.S. Food and Drug Administration's (FDA) Office of Orphan Products Development has granted two separate orphan drug designations for its investigational, oral medication riociguat for the treatment of pulmonary arterial hypertension (PAH) and chronic thromboembolic pulmonary hypertension (CTEPH). The Orphan Drug Designation program grants orphan status to drugs and biologics intended for the safe and effective treatment, diagnosis or prevention of rare diseases and disorders.

Bayer submitted a New Drug Application (NDA) for riociguat in February 2013 for two indications: (i) the treatment of PAH (WHO Group 1) to improve exercise capacity, improve WHO functional class and delay clinical worsening; and (ii) the treatment of persistent/recurrent CTEPH (WHO Group 4) after surgical treatment or inoperable CTEPH to improve exercise capacity and WHO functional class.

In September, riociguat was approved by Health Canada under the trade name Adempas® for the treatment of inoperable or persistent/recurrent chronic thromboembolic pulmonary hypertension (CTEPH) after surgery in adult patients with WHO Functional Class II or III pulmonary hypertension.

PAH and CTEPH are both life-threatening forms of pulmonary hypertension that cause significantly increased pressure in the pulmonary arteries. Riociguat is an investigational, oral medication for the treatment of adult patients with PAH or persistent/recurrent CTEPH after surgical treatment or inoperable CTEPH. If approved by the FDA, it would create a new class of drugs available in the U.S. Pulmonary hypertension is associated with endothelial dysfunction, impaired synthesis of nitric oxide (NO) and insufficient stimulation of soluble guanylate cyclase (sGC). Riociguat stimulates sGC independent of NO and increases the sensitivity of sGC to NO.

About Pulmonary Arterial Hypertension (PAH)
PAH, one of the five types of pulmonary hypertension (PH), is a progressive and life-threatening disease in which the blood pressure in the pulmonary arteries is significantly increased due to vasoconstriction and which can lead to heart failure and death. PAH is characterized by morphological changes to the endothelium of the artery of the lungs causing remodeling of the tissue, vasoconstriction and thrombosis-in-situ. As a result of these changes, the blood vessels in the lungs are narrowed, making it difficult for the heart to pump blood through to the lungs. PAH is a rare disease and affects an estimated 52 people per million globally. It is more prevalent in younger women than men. In most cases, PAH has no known cause and, in some cases, it can be inherited.

Despite the availability and advantages of several approved PAH therapies, the prognosis of patients remains poor and new treatment options are needed. Mortality among PAH patients remains high: 15% at one year and 32% at three years after diagnosis.

About Chronic Thromboembolic Pulmonary Hypertension (CTEPH)
CTEPH is a progressive and life-threatening disease and a type of PH in which it is believed that thromboembolic occlusion (organized blood clots) of pulmonary vessels gradually leads to increased blood pressure in the pulmonary arteries, resulting in an overload of the right heart. CTEPH is a rare disease and is comparable in terms of population size to PAH, though fewer diagnoses have been made so far. CTEPH may evolve after prior episodes of acute pulmonary embolism, but the pathogenesis is not yet completely understood. The standard and potentially curative treatment for CTEPH is pulmonary endarterectomy (PEA), a surgical procedure in which the blood vessels of the lungs are cleared of clot and scar material. However, a considerable proportion of patients with CTEPH (20%-40%) are not operable, and in up to 35% of patients the disease persists or recurs after PEA. To date, no approved pharmacological therapy exists for CTEPH and, as a result, there is an urgent unmet medical need for patients who are unable to undergo surgery or who have persistent or recurrent pulmonary hypertension (PH) after surgery.

About Riociguat
Riociguat (BAY 63-2521) is a soluble guanylate cyclase (sGC) stimulator, the first member of a novel class of compounds, discovered and developed by Bayer as an oral treatment to target a key molecular mechanism underlying PH. Riociguat is being investigated as a new and specific approach to treat different types of PH. sGC is an enzyme found in the cardiopulmonary system and the receptor for nitric oxide (NO). When NO binds to sGC, the enzyme enhances synthesis of the signaling molecule cyclic guanosine monophosphate (cGMP). cGMP plays an important role in regulating vascular tone, proliferation, fibrosis, and inflammation.

PH is associated with endothelial dysfunction, impaired synthesis of NO and insufficient stimulation of sGC. Riociguat has a unique mode of action - it sensitizes sGC to endogenous NO by stabilizing the NO-sGC binding. Riociguat also directly stimulates sGC via a different binding site, independently of NO. Riociguat, as a stimulator of sGC, addresses NO deficiency by restoring the NO-sGC-cGMP pathway, leading to increased generation of cGMP.

With its novel mode of action, riociguat has the potential to overcome a number of limitations of currently approved PAH therapies, including NO dependence, and is the first drug that has shown clinical benefits in CTEPH, where no pharmacological treatment is approved.

About Bayer HealthCare
The Bayer Group is a global enterprise with core competencies in the fields of health care, agriculture and high-tech materials. Bayer HealthCare, a subgroup of Bayer AG with annual sales of EUR 18.6 billion (2012), is one of the world's leading, innovative companies in the healthcare and medical products industry and is based in Leverkusen, Germany. The company combines the global activities of the Animal Health, Consumer Care, Medical Care and Pharmaceuticals divisions. Bayer HealthCare's aim is to discover, develop, manufacture and market products that will improve human and animal health worldwide. Bayer HealthCare has a global workforce of 54,900 employees (Dec 31, 2012) and is represented in more than 100 countries.


Posted by E.Pomales P.E date 10/07/13 06:30 AM  Category Pharma Research & Development

Monday, October 7, 2013

Abbott to collaborate with Janssen and Pharmacyclics



Abbott (NYSE: ABT) announced that it will collaborate with Janssen Biotech, Inc. and Pharmacyclics, Inc. to explore the benefits of Abbott's proprietary FISH (fluorescence in situ hybridization) technology for use in developing a molecular companion diagnostic test to identify patients with a genetic subtype of chronic lymphocytic leukemia (CLL), the most common form of adult leukemia.

Under the agreement, Abbott will develop a FISH-based test to identify high-risk CLL patients who have a deletion within chromosome 17p (del17p) and may respond to ibrutinib, an oral, small-molecule inhibitor of Bruton tyrosine kinase (BTK). Ibrutinib is currently in development by Janssen and Pharmacyclics for several B-cell malignancies, including chronic leukemia and lymphoma. Patients harboring a deletion within chromosome 17p are poor responders to chemoimmunotherapy and have limited treatment options. A test that can accurately detect the 17p deletion therefore identifies a specific patient population with a high unmet medical need.

"Like Abbott's other collaborations in the area of companion diagnostics, our goal is to leverage molecular technologies to help ensure that the right medicine is getting to the right person," said John Coulter, vice president, Molecular Diagnostics, Abbott. "Cancer is a complex disease where, historically, therapies have demonstrated only a 25 percent efficacy rate. Companion diagnostic tests can help improve these outcomes by selecting patients that are more likely to respond to specific therapies, reducing time to the most effective treatment and increasing the number of positive outcomes."

According to the American Society of Clinical Oncology (ASCO), future cancer therapies will be developed through molecular approaches that can accelerate development of more effective, personalized treatments. Identifying specific genetic characteristics of malignancies is expected to also support development of new treatments that target specific proteins involved in the development and growth of cancer.

In 2011, Abbott received U.S. Food and Drug Administration clearance for its Vysis CLL FISH Probe Kit. The kit targets multiple genes, including TP53 (tumor protein p53 gene, located on chromosome 17p) within the del17p region, and is used as an aid for determining prognosis for patients with CLL. Abbott's Vysis CLL FISH Probe Kit will be used for investigational use only to determine genetic marker status as part of the co-development efforts between Janssen, Pharmacyclics and Abbott.

About FISH
FISH (fluorescence in-situ hybridization) technology has a variety of uses. It can identify whether too many, or too few, copies of a particular gene are present in the body's cells or whether certain genes have rearrangements that play an active role in disease progression. Since the technology works especially well for identifying genetic markers in solid tumors, cancer diagnostics are one of the fastest growing applications.

About Abbott Molecular
Abbott Molecular is a leader in molecular diagnostics - the analysis of DNA and RNA at the molecular level. Abbott Molecular's tests can also detect subtle but key changes in patients' genes and chromosomes and have the potential to aid with early detection or diagnosis, can influence the selection of appropriate therapies, and may assist with monitoring of disease progression.

About Abbott
Abbott is a global healthcare company devoted to improving life through the development of products and technologies that span the breadth of healthcare. With a portfolio of leading, science-based offerings in diagnostics, medical devices, nutritionals and branded generic pharmaceuticals, Abbott serves people in more than 150 countries and employs approximately 70,000.


Posted by E.Pomales P.E date 10/07/13 06:27 AM  Category Pharma Industry

Monday, October 7, 2013

Human skin wound dressings to treat cutaneous ulcers



Researchers from Université Laval's Faculty of Medicine and CHU de Québec have shown that it is possible to treat venous ulcers unresponsive to conventional treatment with wound dressings made from human skin grown in vitro. A study published recently in the journal Advances in Skin and Wound Care demonstrates how this approach was successfully used to treat venous lower-extremity ulcers in patients who had been chronically suffering from such wounds.

About 1% of the population suffers from lower-extremity ulcers. These wounds regularly become inflamed or infected and are very slow to heal, if they do at all. They are frequently associated with aging, diabetes, and circulatory system disorders such as varicose veins and oedema. "Obese individuals and those who work constantly standing up are especially vulnerable. These ulcers can persist for years. It can be a hellish clinical situation when standard treatments don't work," noted Dr. François A. Auger, director of both the study and LOEX, the tissue engineering and regenerative medicine laboratory where it was conducted.

Standard treatment for ulcers involves methodically cleaning these wounds and applying compression bandages. Drugs became available around 20 years ago but they are expensive and their efficacy has been somewhat limited. A graft using the patient's own skin can be effective but is problematic because it requires a significant amount of skin to be removed from elsewhere on the body.

This very problem inspired LOEX researchers to use their expertise with in vitro skin culture to create a biomaterial-free biological wound dressing. The process is complex and requires several steps: removing 1 cm² of skin from the patient, isolating the appropriate cells, growing them in vitro, and creating a skin substitute with both dermis and epidermis. After eight weeks of growth, the self-assembled sheets of skin substitute can be applied over the ulcers, much like bandages, and replaced weekly as long as necessary. "This totally biological bandage is much more than a physical barrier," stresses Dr. Auger. "The cells secrete molecules that speed up healing by helping to set natural healing processes in motion. It would be hard to imagine a model closer to the human body's natural physiology."

Tests were successfully carried out on five patients. It took only an average of seven weeks to cure 14 ulcers that had been affecting patients for at least six months, and in some cases, several years. "This is a last recourse once all other treatment options have been exhausted," notes François A. Auger.

Dr. Auger sees another promising application for these biological bandages: "We have shown that this is effective for patients with leg ulcers. Now, we intend to carry out a clinical study to demonstrate that the same treatment works for patients with serious burns, as soon as we get the necessary approvals."

In addition to Dr. Auger, the study's co-authors are: Olivier Boa, Chanel Beaudoin Cloutier, Hervé Genest, Raymond Labbé, Bertrand Rodrigue, Jacques Soucy, Michel Roy, Frédéric Arsenault, Carlos E. Ospina, Nathalie Dubé, Marie-Hélène Rochon, Danielle Larouche, Véronique J. Moulin, and Lucie Germain.


Posted by E.Pomales P.E date 10/07/13 06:25 AM  Category Pharma Research & Development

Monday, October 7, 2013

Chemists find new way to put the brakes on cancer



While great strides have been achieved in cancer treatment, scientists are looking for new targets and the next generation of therapeutics to stop this second leading cause of death nationwide. A new platform for drug discovery has been developed through a collaborative effort linking chemists at NYU and pharmacologists at USC.

In a study appearing in Proceedings of the National Academy of Sciences, the research groups of Paramjit Arora, a professor in NYU's Department of Chemistry, and Bogdan Olenyuk from the USC School of Pharmacy have developed a synthetic molecule, a "protein domain mimetic," which targets the interaction between two proteins, the transcription factor-coactivator complex, at the point where an intracellular signaling cascade converges, resulting in an up-regulation of genes that promote tumor progression.

This approach presents a new frontier in cancer research and is different from the typical search for small molecules that target cellular kinases.

The synthetic molecule the paper describes, HBS 1, is based on a chemically stabilized secondary structure of a protein that mimics a specific fold, the α-helix, and shows outstanding potential for suppression of tumor growth. This compound was specifically designed to interrupt the type of molecular conversation within the cell (called cell signaling) that promotes the growth of cancer cells. Creating HBS 1 required a method for locking correct helical shapes into synthetic strings of amino acids - a method previously developed at NYU.

The studies conducted at NYU and USC show that the molecule disrupted the cancer cell signaling network and reached the correct target in the cell to provide a rapid blockade of tumor growth. Importantly, the compounds did not show any signs of toxicity or negative impact in the test host.

While the in vivo experiments in this research were conducted using renal carcinoma cells, the principles of the design are applicable to many human conditions, including other cancers, cardiovascular diseases, and diabetic complications. The underlying concept of the study, interrupting the connection between genes as they conspire to promote cancer growth, is general and applicable to the protein-to-protein "conversations" implicated in a host of human diseases.

This project required interdisciplinary collaboration between chemists, biologists, pharmacologists and NMR spectroscopists at both NYU and USC. Contributors include Brooke Bullock Lao, Laura Henchey, Nathaniel Traaseth, and Paramjit Arora of NYU's Department of Chemistry, and Swati Kushal, Ramin Dubey, Hanah Mesallati, and Bogdan Olenyuk from USC.

This research was supported by an NSF CAREER Award to Bogdan Olenyuk, and an NIH R01 grant to Paramjit Arora.


Posted by E.Pomales P.E date 10/07/13 06:24 AM  Category General

Monday, October 7, 2013

Potential new drug target for cystic fibrosis



Scientists at the European Molecular Biology Laboratory (EMBL) in Heidelberg and Regensburg University, both in Germany, and the University of Lisboa, in Portugal, have discovered a promising potential drug target for cystic fibrosis. Their work, published online in Cell, also uncovers a large set of genes not previously linked to the disease, demonstrating how a new screening technique can help identify new drug targets.

Cystic fibrosis is a hereditary disease caused by mutations in a single gene called CFTR. These mutations cause problems in various organs, most notably making the lining of the lungs secrete unusually thick mucus. This leads to recurrent life-threatening lung infections, which make it increasingly hard for patients to breathe. The disease is estimated to affect 1 in every 2500-6000 newborns in Europe.

In patients with cystic fibrosis, the mutations to CFTR render it unable to carry out its normal tasks. Among other things, this means CFTR loses the ability to control a protein called the epithelial sodium channel (ENaC). Released from CFTR's control, ENaC becomes hyperactive, cells in the lungs absorb too much sodium and - as water follows the sodium - the mucus in patients' airways becomes thicker and the lining of the lungs becomes dehydrated. The only drug currently available that directly counteracts a cystic fibrosis-related mutation works in just the three percent of patients who carry one specific mutation out of the almost 2,000 CFTR mutations scientists have found so far.

Thus, if you were looking for a more efficient way to fight cystic fibrosis, finding a therapy that would act upon ENaC instead of trying to correct that multitude of CFTR mutations would seem like a good option. But unfortunately, the drugs that inhibit ENaC, mostly developed to treat hypertension, don't transfer well to cystic fibrosis, where their effects don't last very long. So scientists at EMBL, Regensburg University and University of Lisboa set out to find alternatives.

"In our screen, we attempted to mimic a drug treatment," says Rainer Pepperkok, whose team at EMBL developed the technique, "we'd knock down a gene and see if ENaC became inhibited."

Starting with a list of around 7000 genes, the scientists systematically silenced each one, using a combination of genetics and automated microscopy, and analysed how this affected ENaC. They found over 700 genes which, when inhibited, brought down ENaC activity, including a number of genes no-one knew were involved in the process. Among their findings was a gene called DGKi. When they tested chemicals that inhibit DGKi in lung cells from cystic fibrosis patients, the scientists discovered that it appears to be a very promising drug target.
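
As a rough sketch of how hits are typically called in a screen of this kind, the Python snippet below flags genes whose knockdown lowers a normalized ENaC-activity readout beyond a z-score cut-off. It is a generic illustration with simulated data, not the analysis pipeline used in the Cell paper.

    # Generic hit-calling sketch for a high-content siRNA screen (simulated data;
    # not the analysis from Almaça et al., Cell 2013).
    import random
    import statistics

    random.seed(0)

    # Assumed input: one ENaC-activity readout per silenced gene, normalized so that
    # negative-control wells centre around 1.0.
    readouts = {f"gene_{i:04d}": random.gauss(1.0, 0.15) for i in range(7000)}

    mean = statistics.mean(readouts.values())
    sd = statistics.stdev(readouts.values())

    # Call a gene a hit if silencing it reduces ENaC activity more than two standard
    # deviations below the screen-wide mean.
    hits = [gene for gene, value in readouts.items() if (value - mean) / sd < -2.0]

    print(f"Screened {len(readouts)} genes, {len(hits)} putative ENaC regulators (z < -2)")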

"Inhibiting DGKi seems to reverse the effects of cystic fibrosis, but not block ENaC completely," says Margarida Amaral from the University of Lisboa, "indeed, inhibiting DGKi reduces ENaC activity enough for cells to go back to normal, but not so much that they cause other problems, like pulmonary oedema."

These promising results have already raised the interest of the pharmaceutical industry and led the researchers to patent DGKi as a drug target, as they are keen to explore the issue further, searching for molecules that strongly inhibit DGKi without causing side-effects.

"Our results are encouraging, but these are still early days," says Karl Kunzelmann from Regensburg University. "We have DGKi in our cells because it is needed, so we need to be sure that these drugs are not going to cause problems in the rest of the body."

The search for genes that regulate ENaC was undertaken as part of the EU-funded TargetScreen2 project.

Joana Almaça et al. High-Content siRNA Screen Reveals Global ENaC Regulators and Potential Cystic Fibrosis Therapy Targets. Published online in Cell on 12 September 2013. DOI: 10.1016/j.cell.2013.08.045.



Posted by E.Pomales P.E date 10/07/13 06:22 AM  Category Pharma Research & Development

Monday, October 7, 2013

Drug patch treatment sees new breakthrough



An assistant professor with the Virginia Tech - Wake Forest School of Biomedical Engineering has developed a flexible microneedle patch that allows drugs to be delivered directly and fully through the skin. The new patch can quicken drug delivery time while cutting waste, and can likely minimize side effects in some cases, notably in vaccinations and cancer therapy.

News of the delivery technology was published in a recent issue of the scientific journal, Advanced Materials.

Leading development of the flexible patch was Lissett Bickford, now an assistant professor and researcher of biomedical engineering and mechanical engineering, both part of the Virginia Tech College of Engineering. Work on the technology was completed while Bickford was a post-doctoral research associate at the University of North Carolina at Chapel Hill.

Microneedle patch technology used on the skin has existed for several years; each patch contains an array of hundreds of micron-sized needles that pierce the skin and dissolve, delivering embedded therapeutics. However, because of their rigid chemical makeup, the patches have had difficulty piercing fully into the skin, wasting drug material and slowing delivery time. Additionally, the patches have been difficult to produce in bulk; typical fabrication procedures have required centrifugation.

Bickford and her research team, including Chapel Hill graduate student Katherine A. Moga, were able to develop a new flexible microneedle patch that conforms to the skin directly - think of a regular household bandage - and then fully pierces the skin and dissolves. Bickford said the softer, more malleable and water-soluble material also allows for more precise control over the shape, size, and composition of the patch, with little to no waste.

The nanoparticle micro-molding patch is based on Particle Replication In Non-wetting Templates (PRINT for short) technology, developed by University of North Carolina researcher and professor Joseph DeSimone. Unlike other methods for making these patches, the new technology allows for quicker production on a wider scale, reducing related costs.

Research and work on the new patch was funded by the National Institutes of Health and Chapel Hill's University Cancer Research Fund. Advanced Materials wrote of the breakthrough in its July issue.


Posted by E.Pomales P.E date 10/07/13 06:21 AM  Category Pharma Research & Development

Monday, October 7, 2013

Pfizer statement on PhRMA-EFPIA principles for responsible clinical trial data sharing



The Pharmaceutical Research and Manufacturers of America (PhRMA) and the European Federation of Pharmaceutical Industries and Associations (EFPIA) have published new commitments for responsible data sharing practices by biopharmaceutical companies. The new commitments are designed to expand access to clinical trial data and advance science. They propose ways of sharing data that will protect patient privacy, respect the regulatory process, and preserve incentives to undertake novel medical research for the benefit of patients. We believe the solutions offered by PhRMA and EFPIA provide a responsible alternative to other approaches currently being discussed in Europe.

Pfizer has been an active partner in the development of these industry commitments, which we fully support and will adhere to in our policies and practices. We have been and continue to be aligned with the actions outlined in the commitments as part of our ongoing efforts to optimize the use of our clinical data to further medical research and improve the quality of health care. Many of our practices already meet or exceed the standards established by PhRMA and EFPIA.

Pfizer participates in numerous data sharing initiatives, publishes all clinical trial results within 18 months of study completion, and provides existing clinical data in response to legitimate requests from researchers and regulators.

We also have been a leader in making results available and accessible to clinical trial participants and will continue to expand our sharing of information in meaningful ways to inform and empower patients.

We are currently reviewing our policies to ensure they fully reflect the new commitments. An update to our public disclosure policy will be issued later this year. We welcome this opportunity to reaffirm Pfizer's dedication to ensuring that the data collected in the trials we conduct are appropriately used in the service of public health.

About Pfizer Inc.
At Pfizer, we apply science and our global resources to bring therapies to people that extend and significantly improve their lives. We strive to set the standard for quality, safety and value in the discovery, development and manufacture of health care products. Our global portfolio includes medicines and vaccines as well as many of the world's best-known consumer health care products. Every day, Pfizer colleagues work across developed and emerging markets to advance wellness, prevention, treatments and cures that challenge the most feared diseases of our time. Consistent with our responsibility as one of the world's premier innovative biopharmaceutical companies, we collaborate with health care providers, governments and local communities to support and expand access to reliable, affordable health care around the world. For more than 150 years, Pfizer has worked to make a difference for all who rely on us.



Posted by E.Pomales P.E date 10/07/13 06:18 AM  Category Pharma Industry

Monday, October 7, 2013

International Atherosclerosis Society and Pfizer Independent Grants for Learning & Change collaborate on new grant opportunity



The International Atherosclerosis Society (IAS) and Pfizer Independent Grants for Learning & Change (IGLC) have announced their collaboration on a new grant opportunity focused on improving care for patients around the world with medium or high levels of cardiovascular risk, with a particular focus on dyslipidemia.

The Request for Proposals (RFP) being issued by both organizations is intended to encourage organizations to submit concepts and ideas for design and implementation of scalable, sustainable programs for healthcare providers and patients in developing countries, designed to improve the management of dyslipidemia and other cardiovascular risk factors.

The IAS is an international federation of 64 national and regional societies whose basic mission is to promote the scientific understanding of the etiology, prevention, and treatment of atherosclerosis. The IAS exists to coordinate the exchange of scientific information among its member societies, to foster research into the development of atherosclerosis and related cardiometabolic diseases, and to help translate this knowledge into more effective programs to prevent and treat these diseases. As such, the IAS is able to create partnerships worldwide, especially in those areas of the world where the epidemic of atherosclerosis and its related diseases is exploding, and thus meet the growing needs in countries in Central and South America, Eastern Europe, Africa, the Gulf Region, and South and South-East Asia.

The mission of the IGLC is to accelerate the adoption of evidence-based innovations that align the mutual interests of patients, healthcare professionals, and Pfizer, through support of independent professional education activities. The term "independent" means the initiatives funded by Pfizer are the full responsibility of the recipient organization. Pfizer has no influence over any aspect of the initiatives, and only asks for reports about the results and impact of the initiatives which it may share publicly.

Comprehensive management of lipids is increasingly recognized as an integral component of cardiovascular risk reduction. However, there remains a wide gap that separates treatment recommendations and real-world lipid management. This is especially evident in economically developing countries.

According to the World Health Organization, cardiovascular disease accounted for nearly 1 of every 3 deaths in 2004, and approximately 80% of these deaths occurred in low- and middle-income countries.[1] In 2010 the Institute of Medicine issued a report, commissioned by the National Heart, Lung, and Blood Institute, to address the increasing burden of cardiovascular disease in developing countries.[2] This report identified numerous barriers to the control of global cardiovascular disease, as well as specific recommendations to increase investment and implementation of cardiovascular disease prevention and management efforts in low- and middle-income countries. Crucial to these efforts are an increased awareness of chronic diseases as a public health priority and coordination among global, national, and regional stakeholders to strengthen healthcare systems.

This RFP is being issued today by both organizations. The IAS is the lead organization for review and evaluation of applications, and a review committee led by the IAS will ultimately decide which proposals receive funding. Grant funding will be provided by Pfizer. Collectively, up to $2 million is available for award. Initially, project concepts and ideas must be submitted as "Letters of Intent" via the Pfizer website by the deadline of October 31, 2013, and must be a maximum of 3 pages in length.

Only the member societies of the IAS or their partner organizations may submit Letters of Intent to this RFP. Partnering and collaboration is strongly encouraged.

Organizations interested in responding to this RFP should reach out to the IAS member society in their country or region of the world and submit a collaborative proposal. Similarly, IAS member societies interested in responding to this RFP should bring into their project appropriate partner organizations such as academic medical centers, hospitals or healthcare systems, and other societies or associations.

For further information about the RFP, please email RFP@athero.org or IGLC@Pfizer.com or visit the organization websites below.

For the IAS, please visit the website: http://athero.org

For Pfizer IGLC, please visit the website: http://www.pfizer.com/responsibility/grants_contributions/independent_grants

Pfizer Inc.: Working together for a healthier world™
At Pfizer, we apply science and our global resources to bring therapies to people that extend and significantly improve their lives. We strive to set the standard for quality, safety and value in the discovery, development and manufacture of health care products. Our global portfolio includes medicines and vaccines as well as many of the world's best-known consumer health care products. Every day, Pfizer colleagues work across developed and emerging markets to advance wellness, prevention, treatments and cures that challenge the most feared diseases of our time. Consistent with our responsibility as one of the world's premier innovative biopharmaceutical companies, we collaborate with health care providers, governments and local communities to support and expand access to reliable, affordable health care around the world. For more than 150 years, Pfizer has worked to make a difference for all who rely on us.

1. World Health Organization (WHO). Cardiovascular Diseases (CVDs). Fact Sheet No. 317. Geneva, Switzerland: WHO; 2011. http://www.who.int/mediacentre/factsheets/fs317/en/print.html. Accessed May 16, 2012.


Posted by E.Pomales P.E date 10/07/13 06:17 AM  Category Pharma Industry

Monday, October 7, 2013

AstraZeneca and Merck enter licence agreement for investigational oral WEE1 kinase inhibitor therapy for cancer



AstraZeneca announced that it has entered into a licensing agreement with Merck, known as MSD outside the United States and Canada, for MK-1775, an investigational oral WEE1 kinase inhibitor in development as a treatment for cancer.

WEE1 helps to regulate the cell-division cycle. The WEE1 inhibitor MK-1775 is designed to cause certain tumour cells to divide without undergoing the normal DNA repair processes, ultimately leading to cell death. Preclinical evidence suggests that combining MK-1775 with DNA damage-inducing chemotherapy agents can enhance anti-tumour activity in comparison to chemotherapy alone.

Under the terms of the agreement, AstraZeneca will pay Merck a $50 million upfront fee. In addition, Merck will be eligible to receive future payments tied to development and regulatory milestones, plus sales-related payments and tiered royalties. AstraZeneca will be responsible for all future clinical development, manufacturing and marketing.

"MK-1775 is a strong addition to AstraZeneca's growing oncology pipeline, which already includes a number of inhibitors of the DNA damage response," said Susan Galbraith, Head of AstraZeneca's Oncology Innovative Medicines Unit. "The compound has demonstrated encouraging clinical efficacy data and we intend to study it in a range of cancer types where there is a high unmet medical need."

"Merck is committed to advancing potentially meaningful therapeutic options promptly for patients with cancer," said Iain D. Dukes, senior vice president and head of licensing and external scientific affairs at Merck. "We are pleased to enter this agreement with AstraZeneca to realise the potential of MK-1775 while we focus on advancing our later stage oncology programs, MK-3475 and vintafolide."

The agreement is contingent on expiration or termination of the waiting period under the Hart-Scott-Rodino Antitrust Improvements Act.

About MK-1775
WEE1 is a cell cycle checkpoint protein regulator. Preclinical data indicate that disruption of WEE1 may enhance the cell-killing effects of some anticancer agents. MK-1775 is an investigational, orally available inhibitor of the cell cycle checkpoint protein WEE1 and is being evaluated in Phase IIa clinical trials for the treatment of patients with p53-deficient ovarian cancer.

About Merck
Today's Merck is a global healthcare leader working to help the world be well. Merck is known as MSD outside the United States and Canada. Through our prescription medicines, vaccines, biologic therapies, and consumer care and animal health products, we work with customers and operate in more than 140 countries to deliver innovative health solutions. We also demonstrate our commitment to increasing access to healthcare through far-reaching policies, programs and partnerships.

About AstraZeneca
AstraZeneca is a global, innovation-driven biopharmaceutical business that focuses on the discovery, development and commercialisation of prescription medicines, primarily for the treatment of cardiovascular, metabolic, respiratory, inflammation, autoimmune, oncology, infection and neuroscience diseases. AstraZeneca operates in over 100 countries and its innovative medicines are used by millions of patients worldwide.


Posted by E.Pomales P.E date 10/07/13 06:15 AM  Category Pharma Industry

Monday, October 7, 2013

Stem cells engineered to become targeted drug factories



A group of Brigham and Women's Hospital and Harvard Stem Cell Institute researchers, with collaborators at MIT and Massachusetts General Hospital, have found a way to use stem cells as drug delivery vehicles. The researchers inserted modified strands of messenger RNA into connective tissue stem cells - called mesenchymal stem cells - which stimulated the cells to produce adhesive surface proteins and secrete interleukin-10, an anti-inflammatory molecule. When injected into the bloodstream of a mouse, these modified human stem cells were able to target and stick to sites of inflammation and release biological agents that successfully reduced the swelling.

"If you think of a cell as a drug factory, what we're doing is targeting cell-based, drug factories to damaged or diseased tissues, where the cells can produce drugs at high enough levels to have a therapeutic effect," said research leader Jeffrey Karp, PhD, a Harvard Stem Cell Institute principal faculty member and Associate Professor at the Brigham and Women's Hospital, Harvard Medical School, and Affiliate faculty at MIT.

Karp's proof-of-concept study, published in the journal Blood, is drawing early interest from biopharmaceutical companies for its potential to target biological drugs to disease sites. While biological drugs rank among the top sellers in the drug industry, they are still challenging to use, and Karp's approach may improve their clinical application as well as improve the historically mixed clinical trial results of mesenchymal stem cell-based treatments.

Mesenchymal stem cells have become cell therapy researchers’ tool of choice because they can evade the immune system, and thus are safe to use even if they are derived from another person. To modify the cells with messenger RNA, the researchers used the RNA delivery and cell programming technique that was previously developed in the MIT laboratory of Mehmet Fatih Yanik, PhD. This RNA technique to program cells is harmless, as it does not modify the cells' genome, which can be a problem when DNA is used (via viruses) to manipulate gene expression.

"This opens the door to thinking of messenger RNA transfection of cell populations as next generation therapeutics in the clinic, as they get around some of the delivery challenges that have been encountered with biological agents," said Oren Levy, PhD, co-lead author of the study and Instructor of Medicine in Karp's lab. The study was also co-led by Weian Zhao, PhD, at University of California, Irvine who was previously a postdoctoral fellow in Karp's lab.

One challenge with using mesenchymal stem cells is that they have a "hit-and-run" effect: they are rapidly cleared after entering the bloodstream, typically within a few hours or days. The Harvard/MIT team demonstrated that rapid targeting of the cells to the inflamed tissue produced a therapeutic effect despite the cells being rapidly cleared. The scientists want to extend cell lifespan even further and are experimenting with how to use messenger RNA to make the stem cells produce pro-survival factors.

"We're interested to explore the platform nature of this approach and see what potential limitations it may have or how far we can actually push it," Zhao said. "Potentially, we can simultaneously deliver proteins that have synergistic therapeutic impacts."

The research was a highly collaborative effort. In addition to Karp, Levy, and Zhao, collaborators included co-corresponding author Yanik, and Harvard Stem Cell Institute Affiliated Faculty member Charles Lin, PhD, at Massachusetts General Hospital.

The work was supported by the National Institutes of Health, the American Heart Association, and a Prostate Cancer Foundation Challenge Award.


Posted by E.Pomales P.E date 10/07/13 06:14 AM 06:14 AM  Category Pharma Industry





Monday, October 7, 2013

Qingdao’s Eccentric Fortune Tower Tops Out   Print



The mixed-use Fortune Center, in Qingdao, China, will transition from an orthogonal floor plan to a curved structure that seems to embrace the sea. Pei Partnership Architects

A gracefully curved landmark structure in a key industrial city in China required a robust structural engineering solution to reach new heights.

October 1, 2013—Although facade work continues, the structure has been completed for the Fortune Tower in Qingdao, China, a dramatic landmark that stretches 242 m into the sky. The tower is a hybrid structure—the first 25 stories are orthogonal, while the top 39 are elegantly arced and angled to "embrace" the nearby Yellow Sea.

The building was designed by C.C. Pei of Pei Partnership Architects, in New York City, and will be one of the tallest buildings in the city of 8.7 million residents in eastern China. Qingdao boasts a large port that supports a robust industrial economy. On the basis of successful collaboration on past projects, Weidlinger Associates, Inc., of New York City, was called upon to develop the structural engineering solutions for the complex tower.

"They asked if we could do it,” recalls Tian-Fang Jing, P.E., a principal of Weidlinger Associates. "I looked at it. It’s a very interesting project; it’s a difficult project. But as an engineer, we always say, ‘Yes, we can do it.’ ”

The tower's stunning transition from a square stone-facade base to the gleaming upper, curved floors addresses the developer's need for a mixed-use structure. The lower two floors will house retail spaces. Office space is planned for floors 3 through 23. The top floors are slated for a luxury hotel or apartments, each unit two levels high.

"All the units face the ocean and the beach,” Jing says. "The corridor is on the back side.”

The unique shape of the tower presented a daunting structural engineering challenge. Because the cast-in-place concrete core doesn’t extend throughout the full height of the tower, the engineers had to design a transfer structure on the 25th floor to handle the gravity loads and the lateral forces from the floors above.

"If you look at the shape, it’s an eccentric structure,” Jing says. "The torsional effect is so [great]. We did wind tunnel studies. It’s a very complicated shape. At the corner in the back side, you have very large wind forces. In U.S. standards, at the top it’s 60 psf at the corner.”

The transfer floor is massive and robust. The floor is 3.74 m tall, with interior main ribs 500 mm thick from top to bottom. Between the main ribs are secondary ribs that are 300 mm thick. The ribs match the structural elements both above and below the floor. All of this is sandwiched between a pair of 250 mm thick transfer slabs. 

The stone facade of the more traditionally designed lower floors gives way to gleaming metal and glass at the upper, parabolic levels. Pei Partnership Architects

The transfer floor added 20 percent to the cost of the structure, and even once the solution was developed, engineers still had a significant hurdle in their path. Chinese building standards allow transfer floors only up to floor 7 in the region’s seismic zone 6. Because the transfer floor in the Fortune Tower was on floor 25, the team faced a stringent design review.

"If you exceed code limits, then they always have special review groups—experts from design firms, professors from universities, experts in certain fields, construction experts,” Jing says. "They will review your design. Once they review, they give comments or they say, ‘No, this is no good. You have to change it.’ The review board asks you to do a very detailed analysis for this transfer floor. A detailed finite element model was prepared for this special floor to show where the stress concentrations are and how you solved these stress concentrations.”

Jing says that because the eccentric shape of the building produces uneven loads, some of the corner columns in the base are under heavier loads than others. Those columns were bolstered from 1.1 m square to 1.4 m square and made from a composite of steel and concrete. The largest columns have steel wide-flange shapes approximately 500 mm by 500 mm and as thick as 76 mm.

At the top of the tower, strength is provided by smaller cast-in-place concrete elevator cores on either side of the wing.

The building employs a mat foundation on a rock formation, with a five-level basement for parking.

Weidlinger was the design structural engineer on the project, with the local design institute in Qingdao serving as the engineer of record. Construction began in 2009, and current plans call for the building to be complete in late 2014.


 



Posted by E.Pomales P.E date 10/07/13 06:10 AM 06:10 AM  Category Civil Engineers





Monday, October 7, 2013

Experts Seek to Decrease Concrete’s Carbon Footprint   Print




Concrete at the Buffalo City Court Building. Experts in the design and construction professions are making headway toward accurately determining the carbon footprint of concrete used in buildings—and in devising methods to reduce that footprint. Wikimedia Commons/Fortunate4now

The next frontier in sustainable construction may be a process known as carbon accounting of concrete—the method of developing standards for measuring the carbon footprint of concrete and developing strategies to reduce that footprint.

October 1, 2013—According to a 2011 report published by the Concrete Sustainability Hub at the Massachusetts Institute of Technology, buildings are responsible for 41 percent of the United States' total primary energy consumption and 39 percent of its total CO2 emissions. So it is not surprising that nearly every discipline involved in designing and constructing buildings has been discussing how to reduce the carbon footprint of those buildings. Until now, the focus has been primarily on getting those structures to operate with greater energy efficiency—it's the bread and butter of the U.S. Green Building Council's Leadership in Energy and Environmental Design (LEED) rating system.

But now the focus is shifting toward addressing the carbon embodied in the construction materials and construction processes themselves—what Don Davies, P.E., S.E., a senior principal of the Seattle-based structural engineering firm Magnusson Klemencic Associates (MKA), described as the "critical and next frontier in the quest for carbon-footprint reduction in the building industry" in his paper "Climate-Conscious Building Design: New Approaches To Embodied-Carbon Optimization," published in Trim Tab, the online magazine of the International Living Future Institute.

"Embodied carbon is a great step in what structural engineers can do to be more relevant” to making buildings more energy efficient, Davies says. "If I reduce the structure carbon footprint by 50 percent, that’s the equivalent of 6 to 7 years of building operations.” 

MKA’s research suggests that structures account for the largest share of embodied carbon in any project, typically between 28 and 33 percent. Davies says the firm conducted a study on a 24-story hotel in Seattle, and determined that optimizing the concrete could reduce the building’s embodied carbon footprint by 50 percent—primarily by addressing where the concrete is made and what energy source is utilized at the point of fabrication. 

Now a new process called carbon accounting of concrete—developing standards for measuring the carbon footprint of concrete and strategies to reduce that footprint—is gaining traction. Four years ago, a group of experts—including architecture and engineering firms, a general contractor, a material manufacturer, and a concrete supplier—formed the Carbon Leadership Forum (CLF) to determine best practices for measuring carbon in concrete and to push for the adoption of low-emitting concrete.

CLF is operated from the College of Built Environments at the University of Washington, and one of its founding members is Phil Williams, P.E., LEED-AP, a vice president of Webcor Builders of San Francisco. In written responses to questions posed by Civil Engineering, Williams said, "We all recognized the GHG [greenhouse gas] accounting was just the start of a process to better define, mitigate, and report the environmental impacts and properties for building materials. Concrete just happened to be the first one that we elected to address."

Andrew Deitz, the vice president of business development for Climate Earth, a Berkeley-based firm that, among other services, verifies environmental product declarations (EPDs) from product producers based on life cycle assessment data, says there is now "significant momentum” around providing EPDs for concrete. Deitz cites three reasons: a growing demand for green construction projects, a growing demand for actual proof of environmental claims, and advancements in the technologies used to verify EPDs, including those used for concrete. The shift, he says, is from tens of products with EPD labels to thousands. 

As Deitz points out, concrete has a relatively small number of ingredients—around two dozen—which can yield as many as 1,000 different mixes. So it is relatively easy to design a concrete mix specifically to reduce its carbon footprint. 

Kathrina Simonen, R.A., S.E., LEED-AP, M.ASCE, an assistant professor at the University of Washington who oversees the work of the CLF, agrees. "Concrete has the most potential to use this data to change and reduce environmental impacts, because we can make new concrete mixes just by developing them,” she says. "You don’t have to develop a new manufacturing facility,” she explains. "You just have to mix a different mix.” 

The key ingredient, of course, is cement, and it is usually the most resource-intensive. Addressing carbon emissions requires either switching to renewable energy like wind or hydroelectric in the cement manufacturing process, or using other cementitious products—fly ash or slag, for example. Davies notes that improving quality control at batch plants can reduce cement content without reducing concrete strength—a win from an environmental perspective. Ryan Henkensiefken, P.E., LEED-AP, M.ASCE, the business development manager for San Jose-based Central Concrete, a U.S. Concrete Company, notes that concrete produced with portland cement has higher emissions than concrete produced with fly ash or slag, even if the replacement materials must be transported across long distances. 

And measuring embodied carbon is growing more precise. San Jose-based Central Concrete is the first manufacturer to provide EPDs for specific concrete mixes. The company turned to Climate Earth to help develop its accounting system and allow it to be applied across its entire product line. Central Concrete’s process measures raw material supply, including extraction, handling and processing; transportation; manufacturing; and water use in mixing concrete. 

Currently, Central Concrete has 1,400 EPDs that have been validated through a third party, with another 1,500 undergoing validation now. The company’s standard concrete mixes utilize 50 percent cement replacement materials, and the company has delivered mixes with up to 75 percent replacement materials. The 50-percent mixes deliver approximately a 30 to 35 percent carbon reduction over traditional portland cement mixes, Henkensiefken said. 

MKA uses a six-step process that begins by utilizing building information modeling (BIM), which helps to accurately establish the material quantity for a building’s structural frame. From there, the source of the material being used is determined, as well as travel times and shipping methods, the energy required to produce the material, and the carbon footprint of the energy source—all of which yield the overall carbon footprint. 
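
To make the bookkeeping concrete, here is a minimal sketch of that kind of tally: take material quantities from a BIM take-off, multiply by a production emission factor tied to the energy source at the plant, and add transport emissions by distance and mode. The factor values, material names, and the embodied_carbon function are illustrative assumptions, not MKA's actual data or tooling.

```python
# Illustrative embodied-carbon tally for a structural frame.
# All factors and quantities are placeholders, not MKA's methodology.

# kg CO2e per kg of material produced, keyed by energy source at the plant
PRODUCTION_FACTORS = {
    ("concrete", "coal_grid"): 0.15,
    ("concrete", "hydro_grid"): 0.09,
    ("rebar_steel", "coal_grid"): 1.85,
}

# kg CO2e per tonne-km, by shipping method
TRANSPORT_FACTORS = {"truck": 0.10, "rail": 0.03, "ship": 0.015}

def embodied_carbon(materials):
    """Sum production and transport emissions (kg CO2e) for a bill of
    materials exported from a BIM quantity take-off."""
    total = 0.0
    for m in materials:
        production = m["mass_kg"] * PRODUCTION_FACTORS[(m["material"], m["energy_source"])]
        transport = (m["mass_kg"] / 1000.0) * m["distance_km"] * TRANSPORT_FACTORS[m["mode"]]
        total += production + transport
    return total

frame = [
    {"material": "concrete", "mass_kg": 12_000_000, "energy_source": "hydro_grid",
     "distance_km": 40, "mode": "truck"},
    {"material": "rebar_steel", "mass_kg": 900_000, "energy_source": "coal_grid",
     "distance_km": 1200, "mode": "rail"},
]

print(f"Structural frame: {embodied_carbon(frame) / 1000:.0f} t CO2e")
```

Swapping the energy source or the transport mode in the inputs is enough to show why the point of fabrication dominates the result, which is the kind of comparison the six-step process is meant to support.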

Williams points to new product category rules for concrete sponsored by the National Ready Mixed Concrete Association as a sign that there is "significant movement towards a common set of rules for reporting the environmental factors for different concrete mixes, just as we currently report engineering factors such as concrete strength, water-cementitious material ratios, and slump.” 

Further, LEED version 4, due out later this fall, will award two points just for having EPDs, even if they are not particularly flattering. Transparency, Simonen says, is a first step forward. "You have to encourage people to share their information rather than punishing them because it might not be the best," she explains. "It's good for everyone to share their information."

When asked if carbon accounting of concrete is merely a passing trend or a staple of environmental accounting for the future, Williams points to the new LEED standard. (See "USGBC Beta Tests LEED Version 4" on Civil Engineering online.) "The question of whether EPDs will be the future or a fad has been answered, and that unequivocal answer is [that it is] the future for materials—and many would say [it] is actually the present."

And to move that trend along, the carbon footprint of concrete has to be as readily identifiable a metric as its strength or workability. "There are some clients who might be willing to pay slightly more money to have a lower-impact concrete,” says Simonen. "So you could add an environmental footprint threshold to your specifications and be willing to pay more for a low environmental footprint threshold.” It would be the same, she says, as paying more for higher-strength concrete. 

"Theoretically this could help move the market,” she continues. "It could help spur innovation, give people competitive advantages if they can think of less expensive ways to make low-impact concrete.” 

But Davies and Simonen suggest that in this case, it’s important not to let perfection become the enemy of the good. "Most of the data is not presented with its expected error [margin], so it sounds like a very precise number—but it’s not that precise,” Simonen says. "I think that there’s a risk that people could be discouraged by that. [But] I think the benefit is more in terms of understanding relative impacts and working to reduce them. We don’t need to take 50 years to get exact numbers,” she adds. "We just need to make reductions.” 


"If we’re going to use carbon accounting in a meaningful way,” says Davies, "you need to be able to get the information quickly and easily. I really don’t care about absolute numbers. I care about relative numbers. The absolute number of carbon is not the point. The point is, use less.” 

Embodied carbon, ultimately, may prove to be low-hanging fruit along the path of real sustainability. Tougher questions about land use planning and the planned longevity of buildings also need to be asked. 

Still, there is a "general interest in the structural engineering community to do the right thing,” says Davies. "If there’s a way to do the right thing and make a difference, people are supportive of it. But it has to be kept practical and simple enough so it’s worth the effort.”


Posted by E.Pomales P.E date 10/07/13 06:06 AM 06:06 AM  Category Civil Engineers





Sunday, October 6, 2013

FDA Drug Safety Communication: FDA warns of increased risk of death with IV antibacterial Tygacil (tigecycline) and approves new Boxed Warning   Print




This update is in follow-up to the FDA Drug Safety Communication: Increased risk of death with Tygacil (tigecycline) compared to other antibiotics used to treat similar infections issued on September 1, 2010.

Safety Announcement

[9-27-2013]  The U.S. Food and Drug Administration (FDA) is warning that an additional analysis shows an increased risk of death when intravenous (IV) Tygacil (tigecycline) is used for FDA-approved uses as well as for non-approved uses.  As a result, we approved a new Boxed Warning about this risk to be added to the Tygacil drug label and updated the Warnings and Precautions and the Adverse Reactions sections.  A Boxed Warning is the strongest warning given to a drug.  These changes to the Tygacil label are based on an additional analysis that was conducted for FDA-approved uses after issuing a Drug Safety Communication (DSC) about this safety concern in September 2010.

Health care professionals should reserve Tygacil for use in situations when alternative treatments are not suitable.  Tygacil is FDA-approved to treat complicated skin and skin structure infections (cSSSI), complicated intra-abdominal infections (cIAI), and community-acquired bacterial pneumonia (CABP).  Tygacil is not indicated for treatment of diabetic foot infection or for hospital-acquired or ventilator-associated pneumonia.  Patients and their caregivers should talk with their health care professionals if they have any questions or concerns about Tygacil.

In the 2010 DSC, we informed the public that a combined analysis, or meta-analysis, of 13 Phase 3 and 4 trials showed a higher risk of death among patients receiving Tygacil compared to other antibacterial drugs: 4.0% (150/3788) vs. 3.0% (110/3646), respectively.  The adjusted risk difference for death was 0.6%, with a corresponding 95% confidence interval of (0.1%, 1.2%).  The increased risk was greatest in patients treated with Tygacil for ventilator-associated pneumonia, a use for which FDA has not approved the drug.

Since issuing the 2010 DSC, we analyzed data from 10 clinical trials conducted only for FDA-approved uses (cSSSI, cIAI, CABP), including trials conducted after the drug was approved.  This analysis showed a higher risk of death among patients receiving Tygacil compared to other antibacterial drugs: 2.5% (66/2640) vs. 1.8% (48/2628), respectively.  The adjusted risk difference for death was 0.6% with corresponding 95% confidence interval (0.0%, 1.2%).  In general, the deaths resulted from worsening infections, complications of infection, or other underlying medical conditions. 
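
For readers who want to see where figures like these come from, the sketch below computes a crude (unadjusted) risk difference and a Wald 95% confidence interval from the raw counts quoted above. Because FDA reported adjusted estimates, the output will not match the 0.6% figure exactly; the code is only an illustration of the arithmetic, and the risk_difference helper is ours, not FDA's.

```python
from math import sqrt

def risk_difference(deaths_a, n_a, deaths_b, n_b, z=1.96):
    """Unadjusted risk difference with a Wald 95% confidence interval."""
    p_a, p_b = deaths_a / n_a, deaths_b / n_b
    diff = p_a - p_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - z * se, diff + z * se)

# Counts from the 2010 meta-analysis of 13 trials
diff, ci = risk_difference(150, 3788, 110, 3646)
print(f"13-trial analysis: {diff:.1%} (95% CI {ci[0]:.1%} to {ci[1]:.1%})")

# Counts from the 10 trials of FDA-approved uses
diff, ci = risk_difference(66, 2640, 48, 2628)
print(f"Approved-use analysis: {diff:.1%} (95% CI {ci[0]:.1%} to {ci[1]:.1%})")
```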

The latest Tygacil label can be accessed here.

 

Contact FDA

1-800-332-1088
1-800-FDA-0178 Fax
Report a Serious Problem

MedWatch Online

Regular Mail: Use postage-paid FDA Form 3500

Mail to: MedWatch 5600 Fishers Lane

Rockville, MD 20857

Posted by E.Pomales P.E date 10/06/13 05:09 PM 05:09 PM  Category FDA





Sunday, October 6, 2013

Researchers Find Brain Activity Beyond the Flat Line   Print




Pulse trace of an oscilloscope for an electroencephalogram (EEG)

"Fascinating," says medical resuscitation expert Sam Parnia of a recent PLOS One study finding highly unexpected electrical activity in the hippocampus of one man, and 26 cats, with flat-lined "isoelectric" electroencephalograms (EEGs).

The isoelectric flat line—so popular in movies and on TV shows—helps determine whether patients are in a state of brain death from which they cannot recover.

"The brain may survive in deeper states of coma than the ones found during the isoelectric line,” says study lead author Florin Amzica.  Agrees Ari Joffe, a critical care physician and University of Alberta ethicist who researches this issue: "I believe this study shows the electroencephalogram, which records superficial cortical activity, does not tell us what is happening in subcortical nor brainstem areas.  (It) suggests that the limbic system, specifically hippocampus and dentate gyrus, are active when the EEG is isoelectric (no activity) in their human case, and in the cats."

Religions that tolerate clinical use of embryonic stem cells—from discarded IVF clinic embryos—tend to be those defining life's start as a process, not a single bright line. Neuroscientists trying to nail down the elusive neural correlates of thought similarly describe consciousness as a process, not a single bright line.

Increasingly, scientists are referring to the end of life that way, too. "Doctors assume that after clinical death, the brain is dead and inactive,” University of Michigan neuroscientist Jimo Borjigin said after publication of her August PNAS study finding that nine rats dying of cardiac arrest experienced brain activity for 30 seconds after death. Talking to science writer Ed Yong, she said: "They use the term ‘unconscious’ again and again. But death is a process. It is not a black or white line.”

Florin Amzica, PhD

Similarly, the recent PLOS One study found that 26 cats, placed in a flat-lining deep coma, still experienced significant brain activity in their hippocampus, the ancient seat of memory storage. This occurred when researchers added more anesthesia after the isoelectric flat line occurred. "Nobody had tried this before, probably because they 'knew' that there was no point in passing the flat line, thought to be the last frontier of the living brain," says Amzica, a researcher with the University of Montreal. "Since this is false, as we know now, the first one to go beyond the flat line was doomed to find something new--as we did. But this was only possible in a brain whose flat line was not reflecting death, but silence. Not much of a linguistic difference, but quite a difference, in fact."

The unexpected activity was so boisterous it overflowed and spilled into the outer frontal cortex—the main area of cognitive thought. Consciousness is gone when the flat line occurs in tandem with extensive brain damage, says Amzica. But when there is less brain damage, neuroprotection may be going on, he postulates.

Regardless of the reason for it, the startling amount of brain activity—which Amzica's team calls "the Nu-complex state"—surprised many. Together with the other study, it has revived talk about that flat line, and whether it truly represents death.

"We think the Nu-complex state has a greater neuroprotective potential than the ones used up to now by generating a periodic pattern of synaptic activities that could maintain a rudiment of cortical functionality, certainly more efficient that the one encountered during the flat EEG,” says Amzica, referring to the current practice of placing some patients--on ventillators, for example--in artificial comas to protect them. "But this requires further study. The revisiting of brain death criteria would also be refreshing. I do not believe we bring anything that questions the present criteria used in clinical practice. However, whenever the present set of tests are not available, the attempt to induce the Nu-complex state might be useful in determining whether that brain is still functional or not.”

Scientists have speculated for a while that the birth of consciousness is far messier than it appears. Now some researchers say the same may be true of the end of consciousness.

Parnia is chief of resuscitation medicine at Stony Brook University Hospital, and head of the Human Consciousness Project's AWARE study, which scientifically documents after-death experiences in 25 hospitals in North America and Europe. In his 2013 book Erasing Death, he repeatedly refers to death as a process, "not a moment in time." Among the reasons: brain cells can remain viable for up to eight hours after blood flow stops.

Parnia's group has found that, of the ten percent of people revived after being declared dead post-cardiac arrest, two to three percent report an ability to recall things they shouldn't, like conversations held by clinicians while the patients were flat-lined. A growing number of studies like these indicate that "eventually we may have to relook at death," Parnia says.

He actually believes the cat study says more about life than death, since the feline patients weren't dead, just in isoelectric comas with their hearts still beating, when their hippocampal regions were jolted into activity. "The study is fascinating because it implies drugs like the anesthetic used may be able to stimulate people out of deep coma," he says. "There may be a real application there."

Some others interpret the cat study as elucidating the power of drugs to mask, or tamp down, electrical activity in the brain. 

Regardless, "an understanding has gradually been accumulating over decades that, in the early stages of death in the traditional sense of the word, all brain cells are not instantly annihilated, consciousness is not instantly annihilated," Parnia says. "This in itself is significant." 


Posted by E.Pomales P.E date 10/06/13 06:00 AM 06:00 AM  Category Biotechnology Engineering





Sunday, October 6, 2013

Science Finds “Home” of Imagination   Print




Dartmouth graduate student Alex Schlegel reviews brain scans to elucidate the changes in white matter during the process of learning. (Source: Dartmouth College, photo by Eli Burak '00)

Science may have found the home of humanity's most distinctive trait: imagination.

That "home” is actually a network all over the brain, supporting the idea that imagination and other key cognitive processes are born in a "mental workspace” of many neural regions firing together, not isolated regions firing alone, according to a new Proceedings of the National Academy of Science (PNAS) study.

The study is attracting attention.

"An interesting advance on existing research developments,” says Bernard Baars, a pioneer of the "mental workspace” theory, and former Senior Fellow at the La Jolla Neurosciences Institute. He was not involved in the study.

"A terrific article,” says Georgetown University neuroscientist Adam Green, who studies neural correlates of analogical reasoning, and who was also uninvolved.

"A terrific article,” says Georgetown University neuroscientist Adam Green, who studies neural correlates of analogical reasoning, and who was also uninvolved.

"We have gone through an era where we have been fascinated at seeing how various areas of the brain are activated in perceptual and cognitive processes,” says the University of California, Santa Barbara’s Michael Gazzaniga, viewed by many as founder of cognitive neuroscience. Gazzaniga edited the PNAS paper. "Using the new tools of network science, the authors are beginning to map out how the multiple areas interact to produce human cognition.”

In the study, subjects were placed in an fMRI scanner. Different brain networks lit up when they were asked to simply visualize an abstract form, versus when they were asked to create or dismantle one. The ability to manipulate mental images is a key human trait, scientists have long thought.

"The human ability to flexibly combine, break apart, and otherwise modify mental images, symbols, or other ideas or concepts, seems to be central to many of our creative behaviors like the creation of art, scientific, or mathematical thought,” says Dartmouth College neuroscience graduate student Alexander Schlegel. Schlegel, with Dartmouth cognitive neuroscience professor Peter Tse, led the PNAS study. "Our lab is very interested in how the brain can do this. So in this study we asked: `How does the brain manipulate mental images?’”

To study this, "while in an fMRI scan, participants were asked to either imagine specific simple, abstract shapes, or to mentally combine them/break them apart. We found a widespread network of areas in the brain responsible for making the latter manipulations of imagery happen. This network resembles a ‘mental workspace’ that scholars theorize may be at the core of many high-level cognitive abilities that distinguish humans from other animals.”

The networks that light up during such tasks are not "states,” Schlegel cautions, as if "the brain adopts static postures. It doesn’t. That is like asking how the state of a waltz differs from the state of a salsa dance. The brain is a dynamic process.”

That said: "We saw differences in brain activity levels between the `manipulate’ and `maintain’ conditions.” There were no differences in activity levels between the two "manipulation” tasks, but "patterns of activity in several brain regions changed between the two manipulation tasks.”

Says Green: "The frontal and parietal brain regions targeted in this work are known to contribute to intelligent and creative cognition across different tasks and modalities. The type of working memory maintenance and manipulation task used in this work has been repeatedly associated with these brain regions in the fMRI literature. What's new and encouraging about this work is that the authors build on that solid foundation of "where" findings in the fMRI literature, to ask a "how" question. The brain areas in this study, and the type of cognitive task used, do not encapsulate all elements of creativity or intelligence. No single study could. But the neurocognitive operations investigated are likely necessary, if not entirely sufficient, for a great deal of intelligent cognition, so beginning to explicate the mechanistic dynamics of these operations adds real value to our present understanding.”

The study unveils "the physical evolution of thought processes," says University of Montana neurologist Charlie Gray, who showed in Science that visual memory involves synchronized brain waves. "Exciting."

Why has Schlegel's approach not been tried before? The fMRI, and the analytical techniques used, are fairly new. Also: "The field still thinks more in terms of 'mental representations'—like brain states—than 'mental processes,'" Schlegel says. "If your questions revolve around finding out how the brain represents this or that in a static fashion, you tend not to ask questions about how the brain transforms information over time."

Also: "Until recently, it has been fairly taboo in psychology to study mental phenomena (consciousness, imagery, creativity). These couldn’t be directly observed, so were off limits to scientific study. Behaviorism came out of this. Its effects are still present. To some degree the idea of a `mental workspace’ doesn’t fit with models many researchers use when thinking about the organization of the brain.”

"I like the very rapid developments in consciousness science,” agrees Baars. "In the last ten years there has been an amazing wave of excellent brain studies, psychological experiments, and modeling efforts. Consciousness is a rich subject. Until recently it was a taboo in the sciences. But we now can actually look at signaling in the living brain, and make sensible predictions. All this is amazing after years of scientific neglect.”




Posted by E.Pomales P.E date 10/06/13 05:57 AM 05:57 AM  Category Biotechnology Engineering





Sunday, October 6, 2013

IT Hiccups of the Week: Sutter Health’s $1 Billion EHR System Crashes   Print



After a torrid couple of months, last week saw a slowdown in the number of reported IT errors, miscalculations, and problems. We start off this week’s edition of IT Hiccups with the crash of a healthcare provider’s electronic health record system.

Sutter Health’s Billion Dollar EHR System Goes Dark

Last Monday, at about 0800 PDT, the nearly US $1 billion Epic electronic health record (EHR) system used by Sutter Health of Northern California crashed. As a result, the Sacramento Business Journal reported, healthcare providers at seven major medical facilities, including Alta Bates Summit Medical Center facilities in Berkeley and Oakland, Eden Medical Center in Castro Valley, Mills-Peninsula Health Services in Burlingame and San Mateo, Sutter Delta in Antioch, Sutter Tracy, Sutter Modesto, and affiliated doctors' offices and clinics, were unable to access patient medications or histories.

A software patch was applied Monday night, and EHR access was restored. Doctors and nurses no doubt spent most of the day Tuesday entering all the handwritten patient notes they scribbled on Monday.

It still is unclear whether the crash was related to a planned system upgrade that was done the Friday evening before the crash, but if I were betting, I would lay some coin on that likelihood.

Nurses working at Sutter Alta Bates Summit Hospital have been complaining for months about problems with the EHR system, which was rolled out at the facility in April. Nurses at Sutter Delta Medical Center have also complained that hospital management there has threatened to discipline nurses for not using the EHR system; its system went live about the same time as Alta Bates Summit's, but for billing for chargeable items. Sutter management said that it was unaware of any of the issues the nurses were complaining about, and that any complaints they might have lodged were the result of an ongoing management-labor dispute.

Sutter is now about midway through its EHR system roll-out, an effort it first started in 2004 with a planned cost of $1.2 billion and a completion date of 2013. It later backed off that aggressive schedule, and then "jump started" its EHR efforts once more in 2007. Sutter plans to complete the roll-out across all 15 of its hospitals by 2015 at a cost now approaching $1.5 billion.

Hospital management said in the aftermath of the incident, "We regret any inconvenience this may have caused patients.” It did not express regret to its nurses, however.

Computer Issue Scraps Japanese Rocket Launch

Last Tuesday, the launch of Japan's new Epsilon rocket was scrubbed with 19 seconds to go because a computer aboard the rocket "detected a faulty sensor reading." The Japan Aerospace Exploration Agency (JAXA) had spent US $200 million developing the rocket, which is supposed to be controllable from conventional desktop computers instead of massive control centers. This added convenience comes from the extensive use of artificial intelligence to perform automated status checks.

The Japan Times reported on Thursday that the problem was traced to a "computer glitch at the ground control center in which an error was mistakenly identified in the rocket’s positioning.”

The Times stated that, "According to JAXA project manager Yasuhiro Morita, the fourth-stage engine in the upper part of the Epsilon that is used to put a satellite in orbit, is equipped with a sensor that detects positioning errors. The rocket’s computer system starts calculating the rocket’s position based on data collected by the sensor 20 seconds before a launch. The results are then sent to a computer system at the ground control center, which judges whether the rocket is positioned correctly. On Tuesday, the calculation started 20 seconds before the launch, as scheduled, but the ground control computer determined the rocket was incorrectly positioned one second later based on data sent from the rocket’s computer.”
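
As a rough illustration of the sequence described in that quote, the toy sketch below has a ground-side check judge each attitude report arriving in the final 20 seconds and abort on the first out-of-tolerance reading. The telemetry format, tolerance value, and function names are invented for illustration; this is not JAXA's software.

```python
from dataclasses import dataclass

# Toy reconstruction of the pre-launch positioning check described above.
# Timing, tolerance, and data layout are assumptions for illustration only.

@dataclass
class Telemetry:
    t_minus: float           # seconds before launch
    attitude_error_deg: float # positioning error reported by the rocket

ATTITUDE_TOLERANCE_DEG = 0.5  # assumed acceptance limit at ground control

def ground_check(stream):
    """Ground-control side: judge each report; abort on the first
    out-of-tolerance reading, otherwise give the GO."""
    for sample in stream:
        if abs(sample.attitude_error_deg) > ATTITUDE_TOLERANCE_DEG:
            return (f"ABORT at T-{sample.t_minus:.0f}s: "
                    f"attitude error {sample.attitude_error_deg:.2f} deg")
    return "GO"

# The rocket computer starts reporting 20 seconds before launch, once per second
stream = [Telemetry(t, 0.1) for t in range(20, 0, -1)]
stream[1] = Telemetry(19, 0.9)  # a bad (or mis-transmitted) reading one second in
print(ground_check(stream))
```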

The root cause(s) of the problem are still unknown, although it is speculated that it was a transmission issue. JAXA says that it will be examining "the relevant computer hardware and software in detail.” The Times reported on Wednesday that speculation centered on a "computer programming error and lax preliminary checks.”

JAXA President Naoki Okumura apologized for the launch failure, which he said brought "disappointment to the nation and organizations involved.” A new launch date has yet to be announced.

Nasdaq Blames Software Bug For Outage

Two weeks ago, Nasdaq suffered what it called at the time a "mysterious trading glitch.” The problem shut down trading for three hours. After pointing fingers at rival exchange NYSE Arca, it admitted last week that perhaps it wasn’t all Arca’s fault after all.

A Reuters News story quoted Bob Greifeld, Nasdaq's chief executive, as saying Nasdaq’s backup system didn’t work because, "There was a bug in the system, it didn't fail over properly, and we need to work hard to make sure it doesn't happen again.”

However, Greifeld didn’t fully let Arca off the hook. A story at the Financial Times said that in testing, Nasdaq’s Securities Information Processor (SIP), the system that receives all traffic on quotes and orders for stocks on the exchange, "was capable of handling around 500,000 messages per second containing trades and quotes. However, in practice, Nasdaq said repeated attempts to connect to the SIP by NYSE Arca, a rival electronic trading platform, and streams of erroneous quotes from its rival eroded the system’s capacity in a manner similar to a distributed denial of service attack. Whereas the SIP had a capacity of 10,000 messages per data port, per second, it was overwhelmed by up to more than 26,000 messages per port, per second.”

Nasdaq said that it was now looking at design changes to make the SIP more resilient.

A detailed report on the cause of the failure will be released in about two weeks.


Posted by E.Pomales P.E date 10/06/13 05:54 AM 05:54 AM  Category Computer Engineers





Sunday, October 6, 2013

An Engineering Career: Only a Young Person’s Game?   Print



If you are an engineer (or a computer professional, for that matter), the risk of becoming technologically obsolete is ever growing. To be an engineer is to accept the fact that at some future time—always sooner than one expects—most of the technical knowledge you once worked hard to master will be obsolete.

An engineer's "half-life of knowledge," an expression coined in 1962 by economist Fritz Machlup to describe the time it takes for half the knowledge in a particular domain to be superseded, has, everyone seems to agree, been steadily dropping. For instance, a 1966 story in IEEE Spectrum titled "Technical Obsolescence" postulated that the half-life of an engineering degree in the late 1920s was about 35 years; for a degree from 1960, it was thought to be about a decade.

Thomas Jones, then an IEEE Fellow and president of the University of South Carolina, wrote a paper in 1966 for the IEEE Transactions on Aerospace and Electronic Systems titled "The Dollars and Cents of Continuing Education," in which he agreed with the 10-year half-life estimate. Jones went on to roughly calculate what effort it would take for a working engineer to remain current in his or her field.

Jones postulated that a typical undergraduate engineer invested some 40 hours a week of study over 120 weeks in his or her degree, or about 4,800 hours total. Assuming a half-life of 10 years, Jones said about 2,400 hours of undergraduate knowledge has probably been superseded. To replace that obsolete knowledge, and assuming there were 48 weeks a year to devote to knowledge replacement, Jones reasoned that an engineer would need to spend 5 hours each of those weeks gaining new technology, mathematics, and scientific knowledge if he or she wished to remain technically current. That, of course, assumed the engineer didn't forget any previously learned knowledge that was still relevant.

Jones emphasized in his article that "Life-long learning of engineering is possible only by disciplined life-long study and thought." Over a 40-year engineering career, a person would need to spend 9,600 hours in study to remain current, or the time needed to earn two undergraduate degrees.

Jones hinted in his paper at the continuing issue of accelerating "knowledge decay," which can be seen rising again as an issue in a 1991 New York Times article, "Engineer Supply Affects America." The Times article cites the IEEE as a source when it reported that the half-life of engineering skills was by then estimated to be less than 5 years, and for a software engineer, less than three. A few years later, in 1996, Craig Barrett of Intel lent credence to that belief when he stated, "The half-life of an engineer, software or hardware, is only a few years." In 2002, William Wulf, the president of the National Academy of Engineering, was quoted as saying that "The half-life of engineering knowledge… is from seven to 2½ years." More recent estimates emphasize the low end of the range, especially for those working in IT.

Philippe Kruchten, a thirty-year software engineering practitioner and manager before he became a professor of software engineering at the University of British Columbia in Vancouver, took an informal stab in 2008 at the half-life of software engineering ideas by re-examining 1988 issues of IEEE Software and trying to see which "are still important today or at least recognizable." Kruchten conjectured in a paper he wrote for IEEE Software that the half-life of software engineering ideas is likely not much more than 5 years.

If we take Kruchten's half-life-of-knowledge estimate of 5 years and apply Jones's formula, an engineer or IT professional today would have to spend roughly 10 hours a week studying new knowledge to stay current (or upskilling, in the current lingo). One may quibble that study productivity on the job is much higher than it was in college or university, but even cutting the time needed by a quarter, to 7.5 hours a week of the intense study 48 weeks every year that Jones said was needed in 1966, would tax many working engineers and IT professionals today. The workload needed to keep current helps explain why the half-life of an engineer or IT professional's career is now about 10 to 12 years or even less.
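
Jones's arithmetic is easy to generalize to any assumed half-life. The short sketch below reproduces his 1966 numbers (a 4,800-hour degree, 48 study weeks a year) and then applies shorter half-lives; the weekly_study_hours function and the specific half-life values tried are simply illustrative assumptions.

```python
def weekly_study_hours(half_life_years, degree_hours=4800, weeks_per_year=48):
    """Hours per week needed to replace the half of a degree's knowledge
    that becomes obsolete over one half-life (Jones's 1966 reasoning)."""
    obsolete_hours = degree_hours / 2
    return obsolete_hours / (half_life_years * weeks_per_year)

for half_life in (10, 5, 2.5):
    print(f"half-life {half_life:>4} yr -> {weekly_study_hours(half_life):.1f} h/week")
# 10 yr -> 5.0 h/week (Jones's figure); 5 yr -> 10.0; 2.5 yr -> 20.0
```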

The perception of technological obsolescence, especially in experienced (aka older) engineers and IT professionals, is one of the primary reasons that employers give in pushing for the hiring of young engineers, IT professionals, and H-1B visa workers. Mark Zuckerberg, CEO of Facebook, who is one of many high-tech employers pushing for more H-1B visas, reflects the prevailing attitude when he stated both that "Our policy is literally to hire as many talented engineers as we can find. The whole limit in the system is that there aren't enough people who are trained and have these skills today," and "I want to stress the importance of being young and technical. Young people are just smarter. Young people just have simpler lives. We may not own a car. We may not have family. Simplicity in life allows you to focus on what's important."

As Zuckerberg indicates, a highly desirable "skill" in young engineers and computer professionals is their perceived willingness to work longer and harder than older workers who usually have families, as well as their perceived willingness to relocate. Obviously, all that extra work leaves little time for disciplined study and thought to stay current beyond today's belief of "what's important."

Of course, there is also that critical matter of pay: younger engineers and IT professionals earn significantly less than experienced ones. Given a choice, many employers would rather hire a couple of inexperienced computer programmers and spend a few months training them than hire (or retain) a more experienced but expensive programmer. As academic-cum-entrepreneur Vivek Wadhwa writes, "the harsh reality is that in the tech world, companies prefer to hire young, inexperienced, engineers" over more experienced ones.

In addition, many employers aren't interested in providing training to engineers or programmers who may be starting to become obsolete, for fear of seeing them leave or be poached for employment elsewhere. Peter Cappelli, a professor of management at Wharton who has published a book on employer hiring and (non)training practices, indicated to Spectrum last year that employers looking for experienced workers have fallen into the habit of poaching employees from their competitors rather than spending the resources to train people within their own organization to meet a specific job skill.

You can further see the youth-driven dynamics at work by looking at the median age of employees at technology companies. The New York Times recently ran a story on a study conducted by PayScale, a company based in Seattle, which found that of the 32 most successful companies in the technology industry, only six had a median employee age greater than 35 years old. Eight of the companies, PayScale found, had a median age of 30 or younger. Facebook, for instance, had a median age of 28, while IBM Global Services, in comparison, had a median age of 38.

What can be done if you are an older engineer or computer professional? Well, you can look at PayScale's list of companies' median ages and try to target those where the median age is older than your age. You could do what some older engineers and computer professionals are doing in Silicon Valley, and try to hide your age by dressing younger, hiding your graying hair, carrying all the latest high-tech gear in a backpack, and even getting eyelid lifts. You could also try to increase your business acumen to convince your employer you are creating shareholder value. Or you may just decide the best option is to spend time designing an escape plan and either aim for a job in management (assuming there is one) or get into another field altogether.

The only bit of schadenfreude for older engineers and computer programmers being pushed aside for younger ones is to know that it will happen to them too, but maybe at an even younger age. After all, that is what many of those getting pushed out today themselves did to engineers and IT professionals just slightly older than themselves back in 2000.

Photo: Andrew Rich/Getty Images


Posted by E.Pomales P.E date 10/06/13 05:52 AM 05:52 AM  Category Computer Engineers





Sunday, October 6, 2013

This Week in Cybercrime: NSA Wants More Info from Firms   Print



You have to give it to Gen. Keith Alexander, head of the U.S. National Security Agency (NSA). The man can stand up to abuse. He's faced the ire of attendees at public events ever since the spy agency's monitoring of U.S. citizens' electronic communications was leaked earlier this year. The aftermath of Wednesday's keynote address at the Billington Cybersecurity Summit, where he called upon the private sector to partner with the NSA, the FBI, the Department of Homeland Security, and the CIA to prevent or limit cybercrime, was no different. He couldn't possibly have expected any different after he said, "We need the authority for us to share [cyberattack information] with [private businesses] and them to share with us."

Despite revelations about the NSA's activities—some that directly contradict previous government assurances about the limits of the surveillance programs—Alexander insisted that the NSA hasn't done anything illegal. Furthermore, he said, the calls from some members of Congress to limit the reach of the NSA and the nation's other spy and law enforcement agencies are based on what he calls sensationalized reporting. Alexander pushed for even more data access from U.S. companies: the more information companies shared with the NSA, the more cyberattack warnings it could supply to them.

But many observers now see that rationale as threadbare and view Alexander and his ilk with a jaundiced eye. Jerry Brito, a researcher who heads the Technology Policy Program at the Mercatus Center at George Mason University, in Virginia, told CSO that the NSA already has the authority to share data with companies. It could simply declassify information, allowing companies to use it to protect themselves. But that’s not what the agency is interested in, Brito insists. "What they really want is more information about the communications of Americans under the rubric of cybersecurity information sharing," he told CSO.

Stolen Data Clearinghouse Gets Info from Its Above-Board Counterparts

Willie Sutton's famous response to being asked why he robbed banks—"Because that's where the money is"—could certainly be the rationale behind a recently discovered cybercrime program targeting data brokerage firms. According to an investigative report [pdf] from security reporter Brian Krebs, an online identity theft service that specializes in selling Social Security numbers, credit and background check reports, and other information gained access to the data by hacking into the networks of companies such as LexisNexis, Dun & Bradstreet, and an employment background screening company called Kroll Background America Inc. Botnets in the companies' systems continually siphoned off information and passed it to servers controlled by the cybercrooks.

The criminal clearinghouse, whose website was at SSNDOB[dot]MS, had served some 1,300 customers who paid hundreds of thousands of dollars to get their hands on the SSNs, birth dates, driver's license records, and the credit and background check information of more than four million U.S. residents.

Researchers Identify Source of Hit and Run Cyberattacks

Security researchers at Kaspersky Lab say they have uncovered details related to a series of "hit and run" attacks against very specific targets. In a blog post on Kaspersky's Securelist blog, the researchers said, "We believe this is a relatively small group of attackers that are going after the supply chain—targeting government institutions, military contractors, maritime and ship-building groups, telecom operators, satellite operators, industrial and high technology companies and mass media, mainly in South Korea and Japan."

What’s most unique about the data theft campaign, which Kaspersky calls "Icefog,” is that after the attackers get what they want, they don’t hang around, using the backdoors installed on the victims’ networks to continue exfiltrating data. They go in knowing exactly what they’re after, take only the target information, then sweep up, turn off the lights, and close the door behind them.

Kaspersky Lab said that it has observed more than 4000 unique infected IPs and hundreds of victims. Some of the companies targeted during the operation, which began in 2011, include defense industry contractors Lig Nex1 and Selectron Industrial Company, shipbuilders such as DSME Tech, and Hanjin Heavy Industries, telecommunications firms such as Korea Telecom, and even the Japanese House of Representatives and the House of Councillors.

Kaspersky has since published a full report (pdf) with a detailed description of the backdoors and other malicious tools used in Icefog, along with a list of ways to tell whether your system has been compromised. The researchers have also put up an FAQ page.

iPhone Break-ins and Countermeasures

Someone tinkering with his Apple iPhone figured out a way to bypass its lock screen, the first line of security for the gadget other than keeping it in your pocket. This week, Apple released its latest countermeasure: an iOS 7 software update that fixes the security hole that allowed an unauthorized user to access information including the handset owner’s e-mail, Twitter, Facebook, and Flickr accounts.

According to Forbes' Andy Greenberg, "swiping upwards on the lockscreen to bring up the iOS Control Center, then opening the alarm clock app, then holding down the power button to show the ‘power off’ and ‘cancel’ options, then tapping ‘cancel,’ and finally quickly double-clicking the home button to bring up the multitasking screen for various apps,” made those apps accessible.

It’s amazing what people with loads of time on their hands eventually stumble upon.

That news came the same week it was revealed that someone had found an even more involved method for fooling the iPhone 5’s fingerprint sensor. According to Marc Rogers, a researcher at the mobile security firm Lookout, it’s possible but highly unlikely that you’ll be the victim of his hack, which he detailed in a blog post ("Why I Hacked Apple’s TouchID, And Still Think It Is Awesome.”). To give you an idea of just how remote the possibility of your phone being duped using his technique, here are a few of the steps Rogers mentions: "You take the cleaned print image and without inverting it, print it to transparency film. Next, you take the transparency film and use it to expose some thick copper clad photosensitive PCB board that’s commonly used in amateur electrical projects. After developing the image on the PCB using special chemicals, you put the PCB through a process called ‘etching’ which washes away all of the exposed copper leaving behind a fingerprint mold.”

In other words, you can rest easy.

Photo: Charles Dharapak/Associated Press


Posted by E.Pomales P.E date 10/06/13 05:50 AM 05:50 AM  Category Communications Engineering





Sunday, October 6, 2013

4 New Ways to Smuggle Messages Across the Internet   Print



SkyDe, StegTorrent, StegSuggest, and WiPad make hiding messages in plain sight—steganography—untraceable.

 


Their neighbors thought they were just ordinary U.S. residents, but secretly they were spies, sent by Russia's Foreign Intelligence Service to gather information on U.S. policies and programs. For years they thwarted detection partly by hiding secret correspondence in seemingly innocent pictures posted on public websites. They encoded and decoded the dispatches using custom-made software.

But the scheme wasn’t as covert as the spies had assumed. Eventually investigators from the U.S. Department of Justice tracked down the altered images, which helped build a case against the Russians. In June 2010, federal agents arrested 10 of them, who admitted to being secret agents a few weeks later.

The act of concealing data in plain sight is known as steganography. Since antiquity, clandestine couriers have used hundreds of steganographic techniques, including invisible ink, shrunken text, and strategically placed tattoos. Picture steganography—one of the Russian spies’ primary tactics—dates back to about the early 1990s. That they used such an old-school strategy is odd, particularly because doctored images can be detected and used as evidence.

A more modern approach, known as network steganography, leaves almost no trail [see "Vice Over IP,” IEEE Spectrum, February 2010]. Rather than embed confidential information in data files, such as JPEGs or MP3s, network steganography programs hide communication in seemingly innocent Internet traffic. And because these programs use short-lived delivery channels—a Voice over Internet Protocol (VoIP) connection, for example—the hidden exchanges are much harder to detect.

Network security experts have invented all of the dozens of publicly documented network steganography techniques. But this doesn’t mean that criminals, hackers, and spies—as well as persecuted citizens wanting to evade government censorship or journalists wanting to conceal sources—aren’t using these or similar tactics. They probably are, but nobody has tools that are effective enough to detect these techniques. In fact, had the Russian spies used newer steganography methods, they might not have been exposed so handily.

As members of the Network Security Group at Warsaw University of Technology, in Poland, we study new ways to disguise data in order to help security experts design better detection software for those cases when steganography is used for nefarious purposes. As communication technologies evolve, we and other steganographers must develop ever more advanced steganography techniques.

About a decade ago, state-of-the-art programs manipulated the Internet Protocol primarily. Today, however, the most sophisticated methods target specific Internet services, such as search tools, social networks, and file-transfer systems. To illustrate the range of things that are possible, we present four steganographic techniques we’ve recently developed, each of which exploits a common use of the Internet.

Skype

Silences in a telephone conversation can carry a great deal of meaning—and hidden messages.

Skype, Microsoft’s proprietary VoIP service, is particularly easy to exploit because of the way the software packages audio data. While a user—let’s call her Alice—is talking, Skype stuffs the data into transmission packets. But unlike many other VoIP apps, Skype continues to generate audio packets when Alice is silent. This improves the quality of the call and helps the data clear security firewalls, among other advantages.

But the outgoing silence packets also present an opportunity to smuggle secret information. These packets are easy to recognize because they’re much smaller—about half the number of bits—than the packets containing Alice’s voice.

We’ve developed a steganography program that allows Alice to identify the small-size packets and replace their contents with encrypted secret data. We call this program SkyDe, shorthand for Skype Hide. For a covert transaction to take place, the recipient of Alice’s call—let’s name him Bob—also needs to have SkyDe installed on his computer. The software intercepts Alice’s transmission, grabs some of the small packets while letting all of the big ones pass through, and then reassembles the secret message.

Meanwhile, Alice and Bob chat away as if nothing unusual were transpiring. Bob’s Skype application assumes the filched packets have simply been lost. Skype then fills the gap left by each lost packet, most likely by reconstructing its contents from those of the neighboring packets. (Because Skype is proprietary, we don’t know for sure.) As a result, the missing silence packets sound just like all the other silence packets surrounding them.

Our experiments show that up to 30 percent of Alice’s silence packets can transport clandestine cargo without causing a noticeable change in call quality. This means that Alice could send Bob up to about 2 kilobits per second of secret data—roughly 100 pages of text in 4 minutes—without arousing the suspicion of anyone monitoring their call.
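Here is a minimal sketch of that idea in Python, not the actual SkyDe code. The Packet type, the 60-byte size cutoff, and the authentication callback are assumptions for illustration; only the roughly half-size silence packets and the 30 percent replacement budget come from the description above.

from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes

SILENCE_MAX = 60        # assumed cutoff: silence packets are about half the size of voice packets
REPLACE_BUDGET = 0.30   # replace at most ~30 percent of silence packets to preserve call quality

def embed(packets, secret_payloads):
    """Sender side: overwrite a fraction of the small (silence) packets with encrypted secret data."""
    n_silence = sum(1 for p in packets if len(p.payload) <= SILENCE_MAX)
    budget = int(n_silence * REPLACE_BUDGET)
    sent = 0
    for p in packets:
        if sent < budget and secret_payloads and len(p.payload) <= SILENCE_MAX:
            p.payload = secret_payloads.pop(0)  # assumed pre-encrypted and padded to silence-packet size
            sent += 1
    return packets

def extract(packets, authenticates):
    """Receiver side: keep only the silence-size payloads that authenticate as hidden data."""
    return [p.payload for p in packets
            if len(p.payload) <= SILENCE_MAX and authenticates(p.payload)]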

BitTorrent

What better place to hide secrets than in one of the world’s most popular file-sharing systems? The peer-to-peer transfer protocol BitTorrent conveys hundreds of trillions of bits worldwide every second. Anyone sniffing for criminal correspondence on its networks would have better luck finding that proverbial needle in a haystack.

Our group developed StegTorrent for encoding classified information in BitTorrent transactions. This method takes advantage of the fact that a BitTorrent user often shares a data file (or pieces of the file) with many recipients at once.

So let’s say Alice wants to send a hidden message to Bob. First, Bob needs to have previously established control over a group of distributed computers that all run a BitTorrent application. These are most likely computers that Bob owns or, if he’s an especially savvy hacker, computers he has co-opted to do his bidding. Both he and Alice need to know how many computers are in this group and what their IP addresses are.

For simplicity’s sake, let’s say Bob controls a group of just two computers. To initiate a transaction, he commands the computers to each request a file from Alice. In a typical BitTorrent transfer, Alice’s program would transmit the data packets in random order, and Bob’s computers would stitch them back together based on the instructions they contain. Using StegTorrent, however, Alice can reorder the packets to encode a specific bit sequence.

For example, if she sends a packet to computer 1 and then to computer 2, that sequence might designate the binary number 1. But if she sends a packet to computer 2 first, Bob’s StegTorrent program would read the signal as binary number 0. To prevent scrambling due to packet losses or delays, StegTorrent modifies the time stamp on each packet so that Bob can decipher the exact order in which Alice sent them. Our experiments showed that using six IP addresses, Alice can relay up to 270 secret bits per second—enough bandwidth for a simple text conversation—without distorting the transfers or attracting suspicion.
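The ordering trick itself is easy to mock up. The toy sketch below uses the two-computer example from the paragraph above; send_piece and the address labels are placeholders, and the time-stamp correction that StegTorrent applies is omitted.

def encode_bits(bits, addr_1, addr_2, send_piece):
    """Hide one bit in each pair of outgoing pieces via the order of their destinations."""
    for bit in bits:
        first, second = (addr_1, addr_2) if bit == 1 else (addr_2, addr_1)
        send_piece(first)    # the piece itself is ordinary BitTorrent file data
        send_piece(second)   # only the destination ordering carries the hidden bit

def decode_bits(send_order, addr_1):
    """Receiver side: recover the bits from the (timestamp-corrected) send order."""
    return [1 if send_order[i] == addr_1 else 0
            for i in range(0, len(send_order) - 1, 2)]

# Encoding the bits 1, 0, 1 produces sends to (addr_1, addr_2), (addr_2, addr_1), (addr_1, addr_2).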

Google Suggest

Alice can also conceal her messages to Bob—and the fact that the two conspirators are communicating at all—simply by having him perform a series of innocent-looking Google searches. Our StegSuggest steganography program targets the Google Suggest feature, which lists the 10 most popular search phrases given a string of letters a user has entered in Google’s search box.

Here’s how it works: For Alice to send Bob a hidden note, she must first infect his computer with StegSuggest malware so that she can monitor the traffic exchanged between Google’s servers and Bob’s browser. This can be done using basic hacker tools. Then, when Bob types in a random search term, say, "Robots will…,” Alice intercepts the data traveling from Google to Bob. Using StegSuggest, she adds a unique word to the end of each of the 10 phrases Google suggests. The software chooses these additions from a list of 4096 common English words, so the new phrases aren’t likely to be too bizarre. For example, if Google suggests the phrase "Robots will take our jobs,” Alice might add "Robots will take our jobs tree.” Odd, yes, but probably not worthy of alarm.

Bob’s StegSuggest program then extracts each added word and converts it into a 10-bit sequence using a previously shared lookup table. (Each of the 1024 possible bit sequences corresponds to four different words, making the code more difficult to crack.) Alice can thus transmit 100 secret bits each time Bob types a new term into his Google search box.
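A toy version of that lookup is sketched below. The shared word list here is a stand-in (the real table would be a secret list of 4096 common English words); the 10-bits-per-word and four-words-per-value figures come from the description above.

import random

WORDS = ["word%d" % i for i in range(4096)]                 # placeholder for 4096 common English words
WORD_TO_VALUE = {w: i % 1024 for i, w in enumerate(WORDS)}  # four words map to each 10-bit value
VALUE_TO_WORDS = {}
for w, v in WORD_TO_VALUE.items():
    VALUE_TO_WORDS.setdefault(v, []).append(w)

def append_secret_word(suggestion, ten_bit_value):
    """Alice: tack one of the four words for this 10-bit value onto a Google-suggested phrase."""
    return suggestion + " " + random.choice(VALUE_TO_WORDS[ten_bit_value])

def recover_value(modified, original):
    """Bob: the added word is the extra trailing token; map it back to its 10-bit value."""
    added_word = modified[len(original):].strip()
    return WORD_TO_VALUE[added_word]

# Ten suggestions, each carrying 10 bits, give the 100 bits per search mentioned above.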

To send data faster, Alice could hijack the searches of several innocent googlers in a crowded hot spot, such as an Internet café or a college dormitory. In this scenario, both she and Bob would intercept the googlers’ traffic. Alice would insert the coded words into Google’s suggested phrases, and Bob would extract and decode them. He would pass on only the original phrases to the googlers—who would never suspect they had just facilitated a secret exchange.

Wi-Fi Networks

Now let’s say Alice wants to secretly send video in addition to documents or text messages. In this case, she might opt to smuggle the stream in a very average-looking wireless transmission.

But not just any wireless network will do. Alice must use a network that relies on the data-encoding technique known as orthogonal frequency-division multiplexing (OFDM). Wireless standards that employ this scheme are some of the most popular, including certain versions of IEEE 802.11, used in Wi-Fi networks.

To understand how to hide data in OFDM signals, you must first know something about how OFDM works. This transmission scheme divvies up a digital payload among several small-bandwidth carriers of different frequencies. These narrowband carriers are more resilient to atmospheric degradation than a single wideband wave, allowing data to pass to receivers with higher fidelity. OFDM carefully selects carriers and divides the bits up into groups of set length, known as symbols, to minimize interference.

In reality, though, a digital payload rarely divides perfectly into a collection of symbols; there will usually be some symbols left with too few bits. So OFDM transmitters add extra throwaway bits to these symbols until they conform to the standard size.

Because this "bit padding” is meaningless, Alice can replace it with secret data without compromising the original data transmission. We call this steganographic method Wireless Padding, or WiPad. Because bit padding is abundant in OFDM transmissions, Alice can send hidden data to Bob at a pretty good clip. A single connection on a typical Wi-Fi network in a school or coffee shop, for instance, could support up to 2 megabits per second—fast enough for Alice to secretly stream standard-definition video to Bob.
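A back-of-the-envelope sketch of where that capacity comes from: a frame is sent as whole OFDM symbols, so the last symbol is padded up to the symbol size, and those padding bits can carry hidden data instead. The symbol size below is illustrative, not a full IEEE 802.11 model.

def padding_bits(payload_bits, bits_per_symbol):
    """Number of throwaway bits appended to fill out the final OFDM symbol."""
    remainder = payload_bits % bits_per_symbol
    return 0 if remainder == 0 else bits_per_symbol - remainder

def stuff_padding(frame_bits, secret_bits, bits_per_symbol):
    """Append secret bits where the meaningless padding would normally go (zero-fill any shortfall)."""
    pad = padding_bits(len(frame_bits), bits_per_symbol)
    filler = (secret_bits[:pad] + [0] * pad)[:pad]
    return frame_bits + filler

# Example: a 1000-bit payload with 216 data bits per symbol leaves 1000 % 216 = 136 bits
# in the last symbol, so 216 - 136 = 80 padding bits in that frame are free for hidden data.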

About the Authors

Wojciech Mazurczyk, Krzysztof Szczypiorski, and Józef Lubacz wrote "Vice Over IP” in the February 2010 issue of IEEE Spectrum. In 2002, as members of the Network Security Group at Warsaw University of Technology, in Poland, they founded the Stegano.net project to investigate new ways to smuggle data through networks and how to thwart such attempts. After many years spent anticipating evildoers and their machinations, Szczypiorski says his favorite saying comes from Indiana Jones: "Nothing shocks me. I’m a scientist.”



Posted by E.Pomales P.E date 10/06/13 05:46 AM 05:46 AM  Category Communications Engineering





Saturday, October 5, 2013

Larry Burns on Electric Vehicles and the Future of Personal Transportation   Print



The former head of R&D for General Motors, Larry Burns, talks about the convergence of lightweight electric vehicles and self-driving cars in new systems of personal transportation

We’re talking today with Larry Burns, professor of engineering practice at the University of Michigan. Larry was for many years at General Motors serving as vice president of research and development and strategic planning. In this role, he oversaw GM’s advanced technology innovation programs and corporate strategy. He also served on GM’s top decision-making bodies for operations and products. In addition to driving innovation into today’s vehicles, Burns led GM’s development of a new automotive DNA that marries electrically driven and connected vehicle technologies. The goal was to realize sustainable personal mobility with smart cars that are aspirational and affordable. So could you first tell us a little bit about your history with EVs, and yourself, your own involvement with electric vehicles?

Larry Burns: I’d be happy to do that, Susan. I was employed by General Motors really from when I entered college back in the early ’70s until I left General Motors in 2009. From 1998 to 2009, I was the corporate vice president of research and development for GM and also had the position of head of planning and strategic planning for the company. So I was very involved with product planning from the mid-’90s until I left, as well as the more strategic direction of GM.

Certainly all of the issues associated with the future of transportation technology, I became very involved in. It wasn’t just one thing. Battery electric vehicles are certainly a very important part of the future, but hybrid electric vehicles were important, fuel-cell electric vehicles were important, alternative sources of energy, biofuels and diesel, cleaner gasoline, they’re very important, improvements in combustion engines, and then materials. So I had a chance to get exposed to a very wide spectrum of individual technologies, but I think more importantly, how they all fit together to try to move forward, to deal with the very pressing issues that the auto industry was facing, as well as the world was facing.

Let’s face it: When you look at when the automobile was invented by Karl Benz 127 years ago, and it was popularized by Ford 110 years ago, you begin to conclude that even today our cars and trucks are very, very similar to those century-old designs. We haven’t had a major transformation in how people and goods move around and interact economically and socially over that century, unlike other industries—take telecommunications, for example. So as head of strategy, as well as head of R&D, I was very, very interested in whether such a transformation was imminent, and it wasn’t any one technology that I was especially betting on as much as it was the convergence of all of the opportunities that were surfacing in the late ’90s, and then over the 2000 to 2010 time frame.

Susan Hassler: Well, the reason I called you, of course, is because we published a story by Ozzie Zehner called "Unclean at Any Speed” that was very provocative. It basically asked, How can you evaluate how green electric vehicles are? So can you talk a little bit about why it’s hard to understand how green some of these new technologies are?

Larry Burns: Absolutely. First of all, life-cycle analysis is an extremely important tool, and it’s one that we need to continue to apply. And the life cycle isn’t just when you’re driving your car. It’s also when you manufacture the car, design and engineer it, the materials that you use, the degree to which those materials are recyclable, the energy required to create those materials, as well as the energy required to transport the materials to plants, and vehicles from the plants to their destination. And we need to understand all of those factors. And the most important thing to predict is, What are consumers really going to want? Because to solve our climate-change challenges and our energy-diversity challenges, our air-pollution challenges, our highway-fatality challenges, we have to bring about huge change at a huge scale. And we’re not going to be able to do that with small penetrations of special types of technologies, whether they’re hybrids, and, yes, it’s exciting that hybrids have reached volumes of, you know, 3.5 million per year being produced. That’s still a very small market share compared to what’s being done with combustion engines.

So you can’t predict any one technology’s development path, and you have some very serious issues that you’re trying to address, and so you need to have a portfolio of opportunities. And these things don’t play out individually—technology doesn’t play out individually—so we do a lot of great work on lithium-ion batteries that’s fantastic and that stimulates electric vehicles, which helps us make better motors and power electronics. And lo and behold, those power electronics and motors are the same ones that may move really new fuel-cell vehicles forward, or they may contribute to a better form of a hybrid.

The other thing that we need to get at, Susan, which is extremely important, is the mass of the vehicles that we ride around in. A typical car weighs 3000 to 4000 pounds, and when you burn a gallon of gasoline in that car, about 25 percent of it actually turns into torque, or power, to move the car. The rest is lost as heat. And if the car weighs 3000 pounds, and a person weighs 150 pounds, when you play out the arithmetic, you wake up and realize that only 1 percent of BTUs in a gallon of gasoline are moving the person driving it.

Susan Hassler: So if it’s five people in the car and some dogs, it makes sense to drive in your Buick. But if it’s one person and a dog, and maybe nothing else, it maybe doesn’t make sense.

Larry Burns: Absolutely. Yeah, and that’s part of the complexity here. I love the debate that’s been stimulated by Ozzie’s article. I think it’s a very healthy debate to be taking place, and I think IEEE Spectrum’s doing a great job keeping that discussion going. I want to add a few dimensions to that discussion, and it has to do not just with the technology, and whether it’s green, but it has to do with how the technology is used.

So when I use my vehicle, which is a Buick Enclave, and I have two children, I have two dogs, and my mother-in-law lives with us, and when we travel together, a 3-hour drive to an area where we like to spend our leisure time, that’s not too bad when you have that car totally filled with people. But when I commute in that car, that’s not a very good proposition, right? I’m using 1 percent of my BTUs to move me, and I’m moving mostly the mass of the car.

So whether it’s public transit, which sometimes the bus is full, but a lot of times the bus is 10 percent full, and you run the numbers on the utilization of the 10-percent-full bus, and it doesn’t look too green. So people have to be careful about grabbing onto one thing: Public transportation’s the answer, and it’s green! I think shared transportation is an enormously important part of our future. I happen to think it could be a vehicle that we share that’s not a 64-passenger bus; it could be a two-person pod that gets shared. Why do I like that idea? Well, that pod might weigh 500 to 1000 pounds, and when it’s down to that mass, it’s much more amenable to plug-in electric, and that gives you three times more efficiency than combustion.

And you take the mass down by a factor of four, and the combustion up by a factor of three, and you’re looking at 12 times on efficiency, as well as CO2 opportunities. Tesla is a marvelous accomplishment. I give them standing ovations for their ingenuity and their engineering, but they’re still moving a vehicle that weighs on the order of 4000 pounds. And when you drive that by yourself, even though you’re using it as a plug-in electric, you’re moving way too much mass relative to yourself.

Susan Hassler: What kind of things is your group working on at the University of Michigan? And also, what kind of things are you doing with Google? Because then you’re talking about self-driving cars, right?

Larry Burns: Well, yes. Where it gets exciting is the holistic opportunity that surfaces when you combine connected vehicles. A connected vehicle is basically a vehicle that communicates with things along the roadway system and with other vehicles, and you can get content brought in. So OnStar is an example of a connected vehicle. You’ll hear the term "telematics” as well. But connectivity is here. It’s happening, whether it’s in your navigation system or your Bluetooth system, whatever it is, you’re connected as you’re moving.

You combine that with autonomous vehicles, and those are vehicles that literally drive themselves. And that’s what Google’s working on, as well as many other companies. General Motors, Toyota, Daimler, BMW, all of those companies are working hard to push the limits of how far we can take technology so that vehicles can drive themselves.

Then you marry that up with shared vehicle systems. So we’ve all heard of Zipcar, RelayRides, Car2go—those are examples of a conclusion which is, "Geez, why are people buying all these cars and then having them be parked 90 to 95 percent of the time?” Whereas if we shared those cars, we could have those cars utilized 70, 80 percent of the day and dramatically reduce the parking challenge.

Better yet, as a user of that vehicle, I can get dropped off at my door if I combine that with a driverless system. So now you put those things together, and you can begin to think about tailoring the designs of the vehicles to be much lower mass. Ninety percent of the trips in the U.S. are one- and two-person trips. So if we can design a vehicle that’s tailored to the one- to two-person trip, that happens to weigh less than 1000 pounds, that happens to be shared, that happens to not need a driver so you can use your time as you like, and it can reposition itself when it’s dropped you off and pick up somebody else, suddenly you begin to see this world of a totally different mobility system that could be far less costly.

The point I’m trying to make here is we need to think about transforming the entire mobility system as a system. That’s what the Michigan Mobility Transformation Center is focused on, and the work I do with Google is really part of the self-driving vehicle program.

Everything I’ve mentioned is starting to converge. We’ve got electric vehicles. We’ve got shared vehicles. We have connected vehicles. We know how to tailor designs. And autonomous is the next piece that will fall into that puzzle. And when that piece falls into place, I think we’re going to have a transformation. And the heated debate, the passion around "Are battery-electric vehicles green?” really, I think, is a small discussion compared to the opportunity we see down the road to begin moving ourselves around in ways that make much, much better sense than moving around in 4000-pound cars, whether they’re electric cars or combustion cars.

Susan Hassler: Thank you very much for talking with us today, Larry.

Larry Burns: Oh, it’s my pleasure. And I hope this can add constructively to the dialogue that you’ve already created.

Susan Hassler: We’ve been talking with Larry Burns, professor of engineering practice at the University of Michigan and formerly head of General Motors’ research and development program. For IEEE Spectrum’s "Techwise Conversations,” I’m Susan Hassler.

Photo: Brian Kersey/Reuters



Posted by E.Pomales P.E date 10/05/13 07:30 AM 07:30 AM  Category Mechanical Engineering





Saturday, October 5, 2013

A Digital Jigsaw Puzzle   Print



An Israeli group is scanning and reassembling 250 000 document fragments that are hundreds of years old

It’s a classic trope of spy thrillers—a key document gets shredded and has to be put back together again. Sometimes a shredder’s entire bin of paper has to be reconstructed.

What if the bin consisted of a quarter of a million fragments of an unknown number of documents? What if some of them were old—like 600 to 1000 years old? What if there were no bin, and the fragments were spread out over more than 70 libraries and private collections worldwide? That would make for quite a spy thriller.

There is such a thriller, or at least there is such a collection of documents. It’s called the Cairo Genizah, and the Q in this scenario—remember Q? He was the tech guy in the James Bond stories—Q in this scenario, or one of them, is Roni Shweka, of the Friedberg Genizah Project. He has a Ph.D. in Talmudic studies from Hebrew University but also a bachelor’s degree in computer science; he speaks three languages, one of which is Aramaic; and he’s my guest today. He joins us by phone from Jerusalem.

Roni, welcome to the podcast.

Roni Shweka: Hi. Glad to be here.

Steven Cherry: So, what are these documents, why are they so important, and if they’re so important, why are they in fragments?

Roni Shweka: Okay, so this collection is really unique in the whole world. It was found about 100 years ago in an old synagogue in Cairo. And the synagogue was active from the end of the ninth century until the 19th century, over 1000 years. And it happens that there was a big room in the synagogue, in the women’s gallery, where they used to throw manuscripts, torn manuscripts, worn manuscripts after they had been used. Instead of throwing it to the garbage, they were throwing it in this room. And why is that? Because according to the Jewish [inaudible], you’re not allowed to throw holy scriptures to the garbage. Any fragment, any paper with the name of God on it, should be buried in the ground and not be thrown in the garbage. So instead of throwing it in the garbage, they were throwing it into that room in the synagogue.

And so it happened that this room contains now a collection of about a quarter of a million fragments spanning from about the ninth century to the 19th century, most of them representing books that were lost otherwise and unknown to us.

Steven Cherry: And I guess there are thousands, tens of thousands, of authors, most of whom are unknown. But I guess one known author is Maimonides, and he’s kind of the big name here, right?

Roni Shweka: Yeah. It’s very interesting, because Maimonides actually used this synagogue as his office. Maimonides lived in old Cairo in the end of the 12th century, and he was actually using the synagogue as his office. And when he was writing his books, after writing the draft, he would just take the draft and throw it in this room. So at this location, we found many drafts in Maimonides’s writing of his works in this room, and it’s very astonishing to see the way of thinking and what was the first version, until he completed the book.

Steven Cherry: And we should mention he was one of the great Talmudic scholars, but he was also popularly known for a book that in English is known as The Guide for the Perplexed.

Roni Shweka: Exactly.

Steven Cherry: So how did the collection get to be scattered all over the world? I mean, do collectors say, "Oh, I see 3 kilos of the Cairo Genizah are up for auction, I’m going to bid on that?”

Roni Shweka: Okay, so the collection was discovered in the end of the 19th century. From 1860 and on, manuscript dealers were taking some fragments from the room and selling it to public libraries and university libraries and Oxford and Cambridge, and so on. But in 1896, Solomon Schechter came from Cambridge and emptied the room, taking about 60 percent of all the fragments to Cambridge University Library, where they are still today in the Taylor-Schechter Collection. The other 40 percent was scattered all over the world by other collectors, dealers and so on. And today you can find some remnants in almost every big library or public library in Europe and North America.

Steven Cherry: So it’s impractical to assemble them all in one place. So the idea was to first catalog them and then scan them, and I guess some of this started in 2006?

Roni Shweka: Yeah. The cataloging process went along very slowly, and even today we don’t have a full catalog of all the Genizah fragments. What the Friedberg Genizah Project did when it was starting to work in about 2006 was to first build an inventory of all the fragments in the world, all the Genizah fragments—there was not such an inventory before—and then to digitize systematically all the fragments in all the libraries in all the universities. And today, after six, seven years from the beginning, we can proudly announce that we were successful to digitize almost every Genizah fragment in the world. We now have a website available to the public, about 450 000 digital images of the Genizah fragments.

So the Friedberg is a project that’s actually a privately funded project, the philanthropist from Toronto, Albert Friedberg, and this software was developed together with our partners, professor Lior Wolf and professor Nachum Dershowitz from the school of computer science at Tel Aviv University. [Editor’s note: The Friedberg Genizah Project was founded by computer scientist Yaacov Choueka, who is Shweka’s father.]

Steven Cherry: So let’s talk about the technology [PDF]. Is the scanning producing images that will be matched up, or is the computer trying to OCR it—that is, turn it into characters, which, by the way, seems almost insane.

Roni Shweka: Okay, so the first thought was only to digitize the fragments and to make them available to the public. This alone was a giant step. Until then, one would go visit Cambridge or visit Oxford, visit another library in the United States. And now you can actually access the images from your computer, so this already was a giant step. But after they were finished, we started to think maybe we can do something with this collection of digital images. Maybe the computer can help us identify or catalog or teach us something about the fragments.

So every image now is going through a long process of minimization, segmentation, and so on. We did not succeed to do OCR yet. The technology is not mature enough yet, because the handwriting is very diverse on the fragments, and the fragments are in very bad shape, usually. But what we can do is to obtain the physical measurements of the fragments, the number of lines, the density of the characters, and so on, and we were able to represent the handwriting style of the fragment by a numeric vector. And now through comparing two numeric vectors that represent two fragments, we can predict if these two fragments were written by the same scribe or written by a different scribe. This has been the mission.

Steven Cherry: Now, you know, I started out by sort of jokingly talking about shredded documents. These are not shredded documents. Basically, these are mostly whole pages, but some of them are individual documents in their own right. But many of them are parts of a larger document, is that right?

Roni Shweka: Yeah, what happened is, many times the same page was torn into scraps, and these scraps are found today in different libraries in different countries, maybe on different continents. You can find a half page in New York in the Jewish Theological Seminary, and the other half of the page in Cambridge, in England. Maybe the same page was torn into several scraps, let’s say three or four, and every part will be found in another library.

Steven Cherry: So let’s say in a jigsaw puzzle you might want to put all the pieces with the blue sky off to one side. Is there anything like blue sky here? What about the physical paper itself, and maybe how it ages. Is that helpful? And I should ask, Is it all paper?

Roni Shweka: No. Some parts are in vellum. Most of it is paper, but about 30 percent of it is vellum.

Steven Cherry: And so can you judge anything from the paper or the vellum itself?

Roni Shweka: We tried automatically to identify by the color if the fragment is vellum or paper. We had some success, but not enough. Let’s say we could predict in about 80 percent of the cases if this is vellum or paper, and this is not good enough. We cannot rely on this prediction. What we are trying to do is actually to rely on the contained handwriting style, and of course you can use the other measurements, as I mentioned before, like the text density. If we are talking about full, complete pages, you can also compare the number of lines, and so on.

Steven Cherry: And so there’s actually been software written to take advantage of those features?

Roni Shweka: Yeah.

Steven Cherry: And is there an artificial intelligence component to that software?

Roni Shweka: There is a lot of machine learning, because the program that evaluates the similarity of the handwriting is trained on examples: first we give it a few thousand pairs that are known to be joins, and more and more pairs that are known to be non-joins, and from this training set it learns which features make a good join. Maybe there are special letters that can predict better than other letters, and so on. We actually cannot reverse engineer the system and understand in every case why it gives a similarity score that is high or low, because it’s very sophisticated and complicated. But eventually the system is based on logic, based on the pairings that we gave it before.

Steven Cherry: So what do scholars expect to learn from these documents, if and when they all get put together?

Roni Shweka: Okay, so some of the documents are actually from complete codices—books. Many of these books were lost to us and were found only in the Genizah. Some of them are rabbinic literature from the ninth, 10th, and 11th centuries; some of them represent lost works from the classical period, from, let’s say, the end of the Talmud—I’m talking about the second century. And the best-known discovery of the Genizah is actually fragments from the Hebrew version of the Ben Sira book. It was known to us only through the Greek translation made in the second century B.C., and in the Genizah, for the first time, we found fragments from the original Hebrew version, and so on.

Another aspect of the Genizah collection is the documentary fragments. We found in the Genizah many, many documents that actually tell the story of the common man, how they lived in the 11th, 12th, 13th century, how they were doing business, where they were traveling, what was the cost of living, what they were wearing, what they were eating, and so on and so on. So it’s really important for the social study and historical study of Jews and non-Jews of this period in Middle Eastern communities.

Steven Cherry: Yeah, the New York Times article about this project said that one of the documents describes a sort of medieval takeout food.

Roni Shweka: Yeah. And we have also many other documents that give us receipts from doctors and so on. I mean, it’s the whole world. Every aspect you can think of in regular life is represented in Genizah. You can find also personalities—a husband writing to his wife, son writing to his father, a teacher is complaining about his student to the father of the student, and so on. The student is not learning, you need to punish him, and so on. So you can find almost every aspect of real life, but from 700, 800, 900 years ago. Very rare that we find such evidence in other cultures.

Steven Cherry: Well, Roni, I think the combination of 21st-century technology and 10th-century documents is irresistible, and indeed we did not resist, and so I thank you for taking on this work, and thank you for joining us today.

Roni Shweka: Thank you for having me.

Steven Cherry: We’ve been speaking with Roni Shweka of the Friedberg Genizah Project about using software to digitally join together a quarter of a million document fragments from hundreds of years ago.

For IEEE Spectrum’s "Techwise Conversations,” I’m Steven Cherry.

Image: The Princeton Geniza Project

This interview was recorded Wednesday, 17 July 2013.
Segment producer: Barbara Finkelstein; audio engineer: Francesco Ferorelli
Read more "Techwise Conversations,” find us in iTunes, or follow us on Twitter.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.


Posted by E.Pomales P.E date 10/05/13 07:27 AM 07:27 AM  Category Software Engineering





Saturday, October 5, 2013

These Robots Will Stop the Jellyfish Invasion   Print


http://i3.ytimg.com/vi/s_wvsotunio/default.jpg

Jellyfish* are serious business. If you get enough of them in one place, bad things happen. And we're not just talking about some mildly annoying stings, but all-out nuclear war. Obviously, we have to fight back. With ROBOTS.

In South Korea, jellyfish are threatening marine ecosystems and are responsible for about US $300 million in damage and losses to fisheries, seaside power plants, and other ocean infrastructure. The problem is that you don't just get a few jellyfish. I mean, a few jellyfish would be kind of cute. The problem is that you get thousands of them. Or hundreds of thousands. Or millions, all at once, literally jellying up the works.

Large jellyfish swarms have been drastically increasing over the past decades and have become a problem in many parts of the world, Hyun Myung, a robotics professor at the Korean Advanced Institute of Science and Technology (KAIST), tells IEEE Spectrum. And they aren't affecting just marine life and infrastructure. "The number of beachgoers who have been stung by poisonous jellyfish, which can lead to death in extreme cases, has risen," he says. "One child died due to this last year in Korea."

So Professor Myung and his group at KAIST set out to develop a robot to deal with this issue, and last month, they tested out their solution, the Jellyfish Elimination Robotic Swarm (JEROS), in Masan Bay on the southern coast of South Korea. They've built three prototypes like the one shown below.


The JEROS robots are autonomous, able to use cameras to locate jellyfish near the surface, Professor Myung explains. The sequence below shows how an on-board computer processes an image and identifies a jellyfish on the water.
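The published processing sequence isn’t reproduced here, but the sketch below shows the general kind of color-and-blob segmentation that could flag pale jellyfish against darker water. It is not KAIST’s algorithm; the HSV range and area cutoff are invented for illustration (OpenCV 4 assumed).

import cv2

LOWER_HSV = (0, 0, 150)    # assumed range: pale, low-saturation blobs
UPPER_HSV = (180, 60, 255)
MIN_AREA = 500             # ignore specks and foam (in pixels)

def find_jellyfish_blobs(bgr_frame):
    """Return contours of bright, washed-out blobs large enough to be jellyfish candidates."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > MIN_AREA]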

Once the robots have found a group of jellyfish, they team up and float around in formation:

Due to the large number of jellyfish, developing some sort of catch-and-release mechanism is just not feasible, so the robots are equipped with hardware that would probably be considered inhumane to use on anything with a backbone. The following video is NSFLOJ (Not Safe for Lovers of Jellyfish):
 

Together, the JEROS robots can mulch approximately 900 kilograms of jellyfish per hour. Your typical moon jelly might weigh about 150 grams. You can do the math on that (or we can, it's about 6,000 ex-jellyfish per hour), but the upshot is that we're going to need a lot of these robots in order to make an appreciable difference.

Professor Myung says that, because the robots are designed to work cooperatively, adding more units shouldn't be a problem, and his team is already planning more tests in their efforts to deter the gelatinous invaders.

*Jellyfish are, of course, not fish, so we really should be calling them "jellies" or "sea jellies."

[ KAIST ]

Images: Hyun Myung/KAIST



Posted by E.Pomales P.E date 10/05/13 07:22 AM 07:22 AM  Category Robotics





Saturday, October 5, 2013

Rethink Robotics Upgrades Baxter to 2.0 Software   Print


http://i3.ytimg.com/vi/bOC_1zEq-yc/default.jpg


When we visited Rethink Robotics last year, Mitch Rosenberg, Rethink’s vice president for marketing and product management, told us that "the day you buy the robot is the day that it’ll perform the least well. Over time, your investment will become more and more valuable because the software will become more and more valuable." With the release of Baxter's 2.0 software, value has been added.

The highlights here (from a purely spectating standpoint) are significantly increased speed and buttery smooth moves, but there's plenty to offer actual users as well:

Baxter is now able to pick and place parts at any axis, allowing the robot to perform a broad array of new tasks, such as picking objects off of a shelf, or loading machines in a horizontal motion. The 2.0 software also allows the customer to define waypoints with increased accuracy; users will be able to define the exact trajectory that they want Baxter’s arms to follow simply by moving them. For example, the robot can be taught where to move its arms in and out of a machine. In addition, the 2.0 software enables customers to train Baxter to hold its arms in space for a predetermined amount of time, or until a signal indicates they can begin moving again. This makes Baxter useful for holding parts in front of scanners, inspection cameras or painting stations, and for working more interactively with other machines (i.e., moving its hand out of a machine while it cycles).

In addition to its expanded task capabilities, Baxter with 2.0 software also features a number of overall performance improvements. Baxter can now operate at a significantly faster pace, pick and place objects with increased consistency and move more fluidly between points. With improvements to its integrated vision, Baxter now has the ability to detect and distinguish between a broader range of part geometries, further broadening its capacity for variably shaped objects.

If you've already adopted Baxter, upgrading to 2.0 is an easy (and free!) download. And if you haven't already adopted Baxter, we'll let Rethink try to convince you that it's worthwhile when we see the latest software in action at RoboBusiness next month.

[ Rethink Robotics ]

Image: cstLab


Posted by E.Pomales P.E date 10/05/13 07:20 AM 07:20 AM  Category Robotics





Saturday, October 5, 2013

Robotic Boat Hits 1000-Mile Mark in Transatlantic Crossing   Print



"Scout,” a 4-meter-long autonomous boat built by a group of young DIYers, is attempting to cross the Atlantic Ocean. It is traveling from Rhode Island, where it launched on 24 August, to Spain, where all being well it will arrive in a few months’ time.

Scout has now gone about 1000 miles (1600 kilometers) of its planned 3700-mile (5900 kilometer) journey. Should it complete this voyage successfully, its passage will arguably belong in the history books.

I say "arguably,” because it won’t be the first time a robotic vessel has crossed the Atlantic: Scarlet Knight, a sea-going robot fielded by researchers at Rutgers University, did that in 2009. But Scout stands to beat out Scarlet in my mind, for several reasons.

You see, Scout would be the first robotic surface vessel to make this crossing. Scarlet was what is known as an oceanographic glider, which porpoises up and down, spending most of the time at significant depth.

Okay, maybe the distinction between a surface vessel and an oceanographic glider is too fine for the typical landlubbing roboticist to care about. But there are other reasons to disqualify Scarlet from the record book of autonomous Atlantic Ocean crossings. For one, Scarlet was launched from a ship about 50 miles offshore of New Jersey. And it was recovered by another ship far offshore from Spain. So it didn’t really make a continent-to-continent voyage at all.

If this sounds like a trivial point, then you’ve probably not done much blue-water sailing. Out in the middle of the ocean, there’s not much to hit. Close to shore, however, you’re in the shipping lanes, fishing boats are zigzagging around deploying nets or hauling them in, and recreational traffic increases enormously. If you’re sailing into a busy port, it can feel like you’re dodging giants in the final hours. So the fact that Scarlet didn’t have to endure the most risky parts of a transatlantic journey is significant.

If that’s not good enough for you, consider this: Scarlet could be remotely controlled. Indeed, soon after Scarlet put to sea, it became clear that there was a problem, one that was corrected by uploading some new parameters by radio. (Spectrum’s coverage of Scarlet describes this episode.) Now how autonomous is that?

Also, Scarlet required a pit stop. Off the Azores, technicians from Rutgers caught up with Scarlet to scrape barnacles off its hull. They did that in the water rather than bringing it aboard ship. So technically, the glider’s journey was uninterrupted. Technically.

Scout is shooting for a transatlantic record under a different set of rules. It was carried off the beach and into the water by two guys on their backs—and only far enough out so that its keel would clear the bottom. Scout sends telemetry updates three times an hour using an Iridium transmitter, but no new instructions or parameters can be sent to it. It must navigate autonomously.

Completely autonomous operation means pesky little problems like that one that dogged Scarlet initially can’t be fixed after the fact, making the enterprise that much more difficult. The Scout team, too, realized their boat had a problem soon after it set off—a software glitch caused it to ignore many of the offshore waypoints that had been programmed in (the bug was in code intended to prevent the boat from backtracking should currents push it east of a given waypoint). But Scout got off okay and is still cruising toward Spain. The team is pretty sure that it is headed for a waypoint that lies about 150 miles west of its final destination, Sanlúcar de Barrameda.

Although the construction of Scout’s hull is somewhat high tech—carbon fiber sandwiching Divinycell foam—the rest of the boat is comparatively simple. Solar panels mounted on the top of the hull charge a lithium-iron-phosphate battery, which in turn powers an ordinary trolling motor attached to the bottom of the hull. (In good DIY style, the motor was purchased at Dick’s Sporting Goods.) Under ideal conditions, the battery will gain sufficient charge during the day to power the boat’s motor throughout the following night. Under less than ideal conditions, the motor shuts down when battery voltage gets too low, and the boat just drifts until it can get charged up again.

Scout is plenty seaworthy even when it’s not under power, and there’s nobody to feed or entertain, so drifting with the waves for a few days is no big deal. Even raging storms shouldn’t be a problem: An angled upper deck and a hefty amount of lead on the bottom of the keel ensure that the boat will quickly right itself if it gets flipped over.

Scout’s builders also included a couple of simple fail-safe features. One is that the onboard computer (an Arduino, of course!) gets automatically reset every few hours. This should protect it from freezing up should there be a memory leak or other subtle problem with its code.

The other clever measure they took was to program the boat to stop and back up a little ways every five hours. Their thinking was that this maneuver could help to clear the keel and motor of any flotsam it might have picked up. They say that they tested this technique and that it works pretty well.

I hope so. In my estimation, Scout’s greatest challenge will come from encounters with marine garbage, in particular stray bits of fishing line, which could easily foul the prop. Sargassum, a common floating seaweed, could also cause trouble.
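Those two fail-safes amount to little more than a couple of timers in the control loop. Here’s a rough sketch in Python for illustration (the real boat runs on an Arduino); the five-hour back-up interval comes from the article, while the reset interval, durations, and callbacks are placeholders.

import time

RESET_INTERVAL_S = 6 * 3600    # "every few hours" -- exact value not given, chosen for illustration
BACKUP_INTERVAL_S = 5 * 3600   # stop and back up every five hours to shed line and weed

def control_loop(navigate, reverse_motor, reboot):
    """Run the normal navigation step, with periodic back-up and watchdog-style reset behaviors."""
    last_reset = last_backup = time.monotonic()
    while True:
        now = time.monotonic()
        if now - last_backup >= BACKUP_INTERVAL_S:
            reverse_motor(30)          # brief reverse (seconds) to clear the keel and prop
            last_backup = now
        if now - last_reset >= RESET_INTERVAL_S:
            reboot()                   # periodic restart guards against memory leaks and lockups
            last_reset = now
        navigate()                     # ordinary waypoint-following step
        time.sleep(1)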

That Scout has made it more than a quarter of the way across is encouraging, though. As a great admirer of ambitious DIY endeavors, I will continue to monitor its progress across the increasingly rough North Atlantic over the next few months and cheer the little boat onward through wind and waves from the comfort of my warm and dry office.

Photo: Scout Transatlantic


Posted by E.Pomales P.E date 10/05/13 07:18 AM 07:18 AM  Category Robotics





Saturday, October 5, 2013

Video Friday: PR2 Surrogates, Zombie F-16s, and Bot & Dolly's Box   Print


http://i3.ytimg.com/vi/H0EoEyvTmiY/default.jpg

Deep down inside, I think I might want to be a robot. It's a distinct possibility, anyway. I mean, it would explain a lot about this borderline unhealthy obsession that I've got going on, right? Immersive virtual reality is very close to making that all possible, and all you need is a little bit of hardware. See how it works in today's Video Friday.

By "a little bit of hardware," we're talking about a Razer Hydra and an Oculus Rift. Oh, um, and you'll also need a PR2. But it's totally worth it, man. Totally worth it.
The only problem with this is that you don't get to blame the robot anymore: now, if your PR2 surrogate can't play chess, it's your own darn fault for being too clumsy.
 


Posted by E.Pomales P.E date 10/05/13 06:32 AM 06:32 AM  Category Robotics





Saturday, October 5, 2013

Whoa: Boston Dynamics Announces New WildCat Quadruped Robot   Print


http://i3.ytimg.com/vi/wE3fmFTtP9g/default.jpg

Boston Dynamics has just updated its YouTube channel with some new videos. One of them is an update on Atlas. Another is an update on LS3. And the third is this: WildCat, a totally new quadruped robot based on Cheetah, and out of nowhere, there's this video of it bounding and galloping around outdoors, untethered, at up to 25 km/h (16 mph). Whoa.

 
 

Here's the video caption:

WildCat is a four-legged robot being developed to run fast on all types of terrain. So far WildCat has run at about 16 mph on flat terrain using bounding and galloping gaits. The video shows WildCat's best performance so far. WildCat is being developed by Boston Dynamics with funding from DARPA's M3 program.

This video was only just posted (perhaps half an hour ago), and the Boston Dynamics website doesn't seem to have any additional information about the robot just yet.

While tonight's unveiling was somewhat of a surprise, we have been expecting this platform to show up at some point. A little over a year ago, when Boston Dynamics had Cheetah sprinting at over 45 km/h (28 mph), we learned that an untethered version called WildCat capable of running outdoors was in the works, and this is the concept image for that robot from September 2012:

That looks pretty close to the actual robot, doesn't it?

WildCat's current top speed of 25 km/h is significantly slower than Cheetah's 45 km/h, but we can only speculate as to whether that's a limitation imposed by the on-board power, the gait, or simply the fact that WildCat is (as far as we know) a newish robot that probably has a lot of refinement in its future. We also don't know how well WildCat might perform outside of a parking lot, or whether it's capable of the same sort of sensor-based obstacle avoidance as LS3 is.

Hopefully, we'll get more info on WildCat from Boston Dynamics in the next day or two, and we'll update this post as soon as we can.

Boston Dynamics ]



Posted by E.Pomales P.E date 10/05/13 06:28 AM 06:28 AM  Category Robotics





Saturday, October 5, 2013

Smart Knife Detects Cancer in Seconds   Print



A new smart knife puts the pathology lab in surgeons' hands by sniffing out cancer cells as it cuts flesh. The  so-called intelligent knife, also known as "iKnife," could allow surgeons to work more swiftly and efficiently to remove cancerous tumors without leaving behind traces of cancer cells.

The iKnife works by detecting cancer cells in the smoke left behind by the electrosurgical knife's act of cutting flesh, according to Science Magazine. When the iKnife sucks up the smoke, it pipes the sample to a mass spectrometer capable of almost instantly analyzing the chemistry of the biological tissue to detect the presence of cancer. That translates into near-instant feedback for surgeons rather than having to wait on sample analysis by a pathology lab.

Chemists at the Imperial College London showed that their iKnife could accurately identify both normal and cancerous tissue from 3000 tissue samples taken during 300 cancer patient surgeries, as reported in the journal Science Translational Medicine. The iKnife could tell the difference between different biological tissues, such as liver or brain, as well as determine if a tumor represented a secondary growth originating from a primary tumor elsewhere.

The iKnife results matched well with pathology lab results in both testing samples and during 91 cancer surgeries. Surgeons received feedback from the iKnife with just a 1- to 3-second delay. The knife's developers eventually envision a display similar to a traffic light that shows a red light to indicate the presence of cancer, a green light for healthy tissue, and a yellow light for an in-between mix.

Zoltan Takats, a chemist at Imperial College London, hit upon the idea of the iKnife when he realized that electrosurgical knives—also known as "flesh vaporizers"—already represented the ideal tools for ionizing tissue in a way that's perfect for mass spectrometry. Such electric wands have been used by surgeons since 1925, according to National Geographic's Only Human blog. 

Whether or not the iKnife actually improves health outcomes for cancer surgery patients remains to be seen. But the knife appears to take yet another step in the evolution of a centuries-old surgical tool that has changed from simple blade to a relatively bloodless cutting instrument—and now to a real-time diagnostic tool. It combines all the promises of technological advancement that have previously applied separately to cancer diagnosis, cancer extraction, and treatment.

Photo: Luke MacGregor/Reuters



Posted by E.Pomales P.E date 10/05/13 06:20 AM 06:20 AM  Category Biotechnology Engineering





Saturday, October 5, 2013

Electronic Skin Lights Up When Touched   Print



Imagine an interactive dashboard or wallpaper inspired by the body’s largest organ

Photo: Ali Javey and Chuan Wang
Supple Skin: Built on plastic, the new electronic skin contains arrays of pressure sensors, thin-film transistors, and organic LEDs.

A team of researchers at the University of California, Berkeley, has developed the first user-interactive "electronic skin” that responds to pressure by instantly emitting light.

"The goal is to use human skin as a model and develop new types of electronics that would enable us to interface with our environment in new ways,” explains Ali Javey, an electrical engineering and computer science professor at Berkeley and leader of the e-skin research team.

The electronic skin is made up of a network of sensors placed on thin plastic substrates that can spatially and temporally map pressure. Javey describes the network as an array of 16 by 16 pixels, each one equipped with a carbon-nanotube thin-film transistor (TFT), a pressure sensor, and an organic light-emitting diode (OLED) on top. When the sensor detects touch, the TFT powers up the OLED, which then emits red, green, or blue light. The harder the pressure, the brighter the light will be. The end product is a thin, flexible material that can be placed on top of all sorts of surfaces.
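As a toy model of that pixel behavior, the sketch below maps a 16-by-16 frame of pressure readings to OLED brightness levels, with harder presses giving brighter light. The scaling is invented; in the real device the transistor and diode handle this in hardware, not software.

FULL_SCALE = 100.0   # arbitrary full-scale pressure reading for the sensor

def frame_to_brightness(pressure_frame):
    """Convert a 16x16 grid of pressure readings into 0-255 brightness levels, one per pixel."""
    return [[min(255, int(255 * (p / FULL_SCALE))) for p in row] for row in pressure_frame]

# A light touch (reading 10) maps to brightness ~25; a firm press (reading 90) maps to ~229.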

Takao Someya, creator of a different type of electronic skin and an associate professor at the University of Tokyo’s Quantum Phase Electronics Center, was particularly impressed with the team’s use of carbon nanotubes. "One of their great achievements is to demonstrate feasibility of carbon nanotube field-effect transistors for large-area, flexible electronic applications,” he says.

Javey, who has been working on developing the e-skin for the past five years, has high hopes for his new material. He’d like to create user-interactive wallpaper or a dashboard that responds to cues such as the driver’s eye or body movements. When asked to describe how the interactive wallpaper would work, Javey referred to the scene in Minority Report in which Tom Cruise controls a computer by moving his hands. "That’s the direction we’re heading to—a new type of interfacing,” he says. "Getting rid of the keyboard, getting rid of display, and become in sync with our surroundings so that you don’t have these physical components sitting around. It’s part of the table; it’s part of the wall.”

In Javey’s proposed system, light sensors would read hand and body motions, and pressure sensors would respond to different degrees of touch. But there’s a good deal of work that must be done before we’ll be seeing interactive wallpaper, he says. "We know how to make complex systems on tiny silicon chips, but on plastic it’s a whole different story,” Javey says. "It’s still not a very complex system we have shown, but it’s still one of the most complex systems we have to date on plastic.” He adds that his team is interested in integrating light sensors as well as data-processing and wireless-communication capabilities onto the substrates.

"This is an inspiring development in the plastic device technology, which is likely to make many everyday experiences more stimulating,” says Nicholas Kotov, a professor at the University of Michigan who is working on flexible, stretchable electronics.

John Rogers, a materials science professor at the University of Illinois at Urbana-Champaign, says "the work illustrates the extent to which research in nanomaterials, once confined strictly to fundamental study on individual test vehicles, is now successfully moving toward sophisticated, macroscale demonstrator devices, with unique function. The results provide more evidence that the field is headed in the right direction.”


Posted by E.Pomales P.E date 10/05/13 06:17 AM  Category Biotechnology Engineering





Saturday, October 5, 2013

The Future of Pharmaceuticals Could Be Electronic Implants   Print



British pharmaceuticals firm betting that "electroceuticals” will treat complex diseases better


It may be a long shot, but why not try it? British drug maker GlaxoSmithKline (GSK) believes the next big wave in medicine will be electroceuticals, a buzzword the company has coined for a technology that would use electrical impulses—rather than the chemicals or biological molecules found in today’s pharmaceuticals—to treat diseases.

Its vision involves more than simply taking medical devices like heart pacemakers—which use electric waveforms to activate or block bundles of nerve cells—to the next level, claims Kristoffer Famm, head of bioelectronics research at GSK. The company is aiming for something much more radical: connecting thousands of tightly packed individual nerve cells with electrodes and associated circuitry to read and interpret the "code” in the collection of nerve-cell fibers that constitute a nerve, and then modulating the code to restore a specific function to a healthy state. "No treatments like this even remotely exist today,” says Famm. "The medicine will speak the body’s language.”

GSK, which is spearheading a major bioelectronics research program, is awarding a US $1 million prize and providing funding for up to 40 researchers working in external labs to further its goal. It has already enlisted several academic centers to participate in the effort, including MIT, the University of Pennsylvania, and the Feinstein Institute for Medical Research, which is already researching the neural codes of several diseases to identify intervention points.

Some medical-device manufacturers, such as Medtronic, the world’s largest, see potential down the road, and possible partnerships too. "Currently we are investing in such technology advancements as device miniaturization and embedding smart sensors and algorithms into our systems, and we are always open to additional new ideas that may lead to new therapies that provide clinical and economic benefit,” says John LaLonde, vice president of product development, technology, and research at Medtronic Neuromodulation. "We do not know enough yet as to how the concept of electroceuticals will play out over the next decade. However, considering our respective areas of expertise, it is possible that a collaboration between the pharmaceutical and medical-device industries could lead to future advancements in the technology.”

Famm compares the process of reading and writing electrical impulses passing through bundles of multifiber nerves to that of signals in network cables transporting digital data. Neurons carry information from one point to another as action potentials, which are spikes of voltage that ripple along a neuron’s length. If researchers can tap into the nerve to either introduce or erase an action potential, Famm says, they will be able to restore an organ or function to a healthy state, for example by coaxing insulin from the pancreas to treat diabetes.
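Reading that neural "code" starts with something conceptually simple: picking individual action potentials, the voltage spikes Famm describes, out of a recorded trace. The toy sketch below shows threshold-based spike detection on a synthetic signal; the sampling rate, spike shape, and threshold are assumptions, and real multifiber nerve recordings demand far more sophisticated signal processing.

```python
import numpy as np

# Toy illustration of reading action potentials from a recorded voltage trace.
# Sampling rate, spike shape, noise level, and detection threshold are all
# assumptions for illustration; real nerve recordings are far messier.

FS = 10_000        # assumed sampling rate, Hz
THRESHOLD = 0.5    # assumed detection threshold, arbitrary voltage units

def detect_spikes(trace: np.ndarray, threshold: float = THRESHOLD) -> np.ndarray:
    """Return sample indices where the trace crosses the threshold upward."""
    above = trace >= threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

if __name__ == "__main__":
    t = np.arange(0, 1.0, 1.0 / FS)
    trace = 0.05 * np.random.randn(t.size)        # background noise
    for spike_time in (0.2, 0.45, 0.8):           # three synthetic spikes
        idx = int(spike_time * FS)
        trace[idx:idx + 20] += np.hanning(20)     # brief voltage deflection
    spikes = detect_spikes(trace)
    print("spike times (s):", (spikes / FS).round(3))
```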

If only such a reset were that easy. Perhaps one of the biggest challenges, says Arthur Pilla, a professor of biomedical engineering at Columbia, is to discover "which bundle of nerves and which signal or regiment of signals you want to program or reprogram.” Modulating or triggering these bundles of nerves in certain ways will also require exploiting "very specific electrical properties of the signals,” argues Pilla, an expert on bioelectromagnetic fields who has been studying their therapeutic effects for decades. And he points to possible problems arising from invasive procedures that require the implantation of devices.

Electroceuticals, unlike the noninvasive electromagnetic solutions Pilla has helped pioneer, will involve embedding microscopic devices. Famm imagines a possible array of nanosize electrodes that interface with nerve fibers and record the firing of individual neurons. Such arrays will need to both record and stimulate such impulses and include an intelligent signal-processing component to guide each.

"While huge strides have been made with electrodes, we don’t have perfectly reliable and durable arrays that you can snap onto any nerve and that can read and write every nerve fiber,” Famm says. "That is one of our holy grails.”

If they are successful, electroceuticals could offer a higher degree of control in modulating biological functions and curing diseases while avoiding unwanted side effects, according to Famm. "Our goal, basically, is to speak the electrical language of the nerves to achieve a higher treatment effect,” he says. "It’s a huge interdisciplinary challenge that will require people with deep expertise in electrical engineering, neural signal analysis, and biological functions related to diseases.”

GSK is planning a global forum in December to bring research leaders together to collectively identify a key hurdle in the field of electroceuticals. The group that overcomes that hurdle will leave a million dollars richer.


Posted by E.Pomales P.E date 10/05/13 06:12 AM  Category Biotechnology Engineering





Saturday, October 5, 2013

Sun Catalytix "Artificial Leaf" Can Heal Itself   Print



The so-called "artificial leaf" is continuing to grow up, with an announcement this week at the American Chemical Society meeting that the device can essentially heal, on its own, damage it sustains during energy generation. This would allow Sun Catalytix's device—a "catalyst-coated wafer of silicon"—to run in the impure, bacteria-laden water found out in the world instead of just in pristine laboratory conditions.

The artificial leaf actually mimics only a part of the photosynthetic process found in plants. Drop the leaf into some water and expose it to sunlight, and the catalysts on its surface break down water into hydrogen and oxygen. Those bubbling gases can be collected and stored to be used as energy.

"We figured out a way to tweak the conditions so that part of the catalyst falls apart, denying bacteria the smooth surfaced needed to form a biofilm. Then the catalyst can heal and re-assemble," said Daniel Nocera, founder of Sun Catalytix and a professor at MIT, according to a press release.

The company has been touting the leaf as a cheap and easy solution to global issues of energy poverty. Nocera says that as many as 3 billion people lack access to "traditional electric production and distribution systems," and that a simple device one drops in a bucket of water—even dirty water, with the latest development—could provide standalone electricity to those multitudes. A couple of years ago, the company's chief technology officer Tom Jarvi told me a bit more cautiously that because "the inputs are light and water, and the output is fuel, one can certainly see the applicability of something like that to the developing world." The economics of really reaching such an audience are probably still in question, and there is still a step missing: converting the fuel the leaf creates into something readily usable in generators or even cars. The leaf is also relatively inefficient, well below 10 percent, compared with the 15 to 20 percent efficiency of solar panels.

Still, back in 2011 I wrote this about the Sun Catalytix leaf:

Jarvi says the company expects to be able to bring the device to the point where a kilogram of hydrogen could be produced for about US $3. Given that a gallon of gasoline contains about the same amount of energy as 1 kg of hydrogen, as long as gas prices stay north of $3 per gallon, this would make a cost-effective fuel source.

Let's see, what have U.S. gas prices done since then? Okay: they never dropped below $3.22 and brushed up against $4.00 once or twice. Looks like we're still on track there.
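The break-even argument in the quote above is straightforward energy accounting: a kilogram of hydrogen holds roughly the same energy as a gallon of gasoline (about 120 megajoules on a lower-heating-value basis), so $3-per-kilogram hydrogen competes whenever gasoline costs more than about $3 per gallon, at least before conversion losses. A minimal check, with the energy figures treated as stated assumptions:

```python
# Back-of-envelope check of the $3/kg hydrogen vs. gasoline comparison above.
# Energy contents are approximate lower-heating values used as assumptions.

H2_ENERGY_MJ_PER_KG = 120.0      # ~120 MJ per kg of hydrogen (approximate)
GASOLINE_MJ_PER_GALLON = 120.0   # ~120 MJ per US gallon of gasoline (approximate)

def hydrogen_breakeven_gas_price(h2_price_per_kg: float) -> float:
    """Gasoline price ($/gallon) at which hydrogen at h2_price_per_kg costs the same per MJ."""
    return (h2_price_per_kg / H2_ENERGY_MJ_PER_KG) * GASOLINE_MJ_PER_GALLON

if __name__ == "__main__":
    # With U.S. gas prices above $3.22/gallon since 2011, $3/kg hydrogen stays
    # competitive on a pure energy-content basis (ignoring conversion losses).
    print(f"break-even gasoline price: ${hydrogen_breakeven_gas_price(3.00):.2f}/gallon")
```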

Photo: mediaphotos/Stockphoto







Saturday, October 5, 2013

Colorado River Hydropower Faces a Dry Future   Print




Photo: Charles Platiau/Reuters
Low Energy: There’s less water than ever behind Glen Canyon Dam and other hydropower generators on the Colorado River.
 

Last year, the Hoover Dam hydroelectric plant installed the first of five wide-head turbines. These are designed to work efficiently even as the Colorado River shrinks under a record-long drought. The dry spell affecting the dam’s power source has outlasted any other in the 77 years that the structure has generated electricity. By the time the fifth turbine is installed in 2016, Hoover Dam will likely need them all.

Lake Mead, which sits on the border between Nevada and Arizona behind Hoover Dam, is expected to drop 2.4 meters in 2014, as less and less water flows downstream from Lake Powell, which straddles Utah and Arizona. The sharp decline comes about because the U.S. Bureau of Reclamation needs to cut Lake Powell’s water release by nearly 1 billion cubic meters, to 9.2 billion cubic meters, for the 2014 water season, the smallest release since the lake was filled in the 1960s. The flow of water to Lake Powell from key tributaries has been decreasing for more than a decade, and the bureau’s forecasters expect that the reservoir could hit an all-time low this season.

"This is the most extreme drought since measurements began in the early 1900s,” says Jack Schmidt, a professor of watershed sciences at Utah State University and current chief of the U.S. Geological Survey’s Grand Canyon Monitoring and Research Center. A heavy snowfall this winter could change everything, but "no one knows when this will end,” he adds.

The five new wide-head turbines being installed at the Hoover Dam are meant to keep the power plant working with less water in the lake. "We’re trying to increase the power we can get from decreasing levels of water,” says Rob Skordas, area manager of the Lower Colorado Dams office of the Bureau of Reclamation. Skordas says the new turbines should function well even if the water elevation falls to 305 meters above sea level, far below the historical average of 358.

Lake Mead, however, was already down to 337 meters in late August, when power capacity at Hoover Dam was at 1735 megawatts, down from a full capacity of 2074 MW. As water levels continue to decline, power output could fall even farther.
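The reason a shrinking reservoir cuts generating capacity is the basic hydropower relationship: output scales with both the flow through the turbines and the height of the water above them (the head). The rough sketch below uses that relationship only for illustration; the flow rate, turbine efficiency, and intake elevation are assumptions, not Hoover Dam's actual operating figures.

```python
# Rough illustration of why lower reservoir levels mean less hydropower:
# P = rho * g * Q * H * eta, where H is the head (water height above the turbines).
# The flow rate, efficiency, and intake elevation below are assumptions,
# not Hoover Dam's actual operating parameters.

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_mw(flow_m3_per_s: float, head_m: float, efficiency: float = 0.9) -> float:
    """Electrical output in megawatts for a given flow, head, and turbine efficiency."""
    return RHO * G * flow_m3_per_s * head_m * efficiency / 1e6

if __name__ == "__main__":
    INTAKE_ELEVATION_M = 200.0  # assumed turbine intake elevation
    for lake_level_m in (358.0, 337.0, 305.0):  # historical average, Aug. 2013, design floor
        head = lake_level_m - INTAKE_ELEVATION_M
        print(f"lake at {lake_level_m:.0f} m -> {hydro_power_mw(1000.0, head):.0f} MW "
              f"(assumed 1000 m^3/s flow)")
```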

Upstream at the Glen Canyon Dam, power production is expected to be 8 percent lower than in 2013 as a result of the lower water, according to Jane Blair, manager of the bureau’s Upper Colorado power office. The bureau estimates that the Western Area Power Administration will have to spend about US $10 million to meet its electricity supply obligations.

If the situation doesn’t improve, Glen Canyon Dam could have even bigger problems. When the water level drops below 1063 meters, just about 30 meters below its August levels, vortex action would draw air into the turbines and damage them. Power generation would then likely cease at Glen Canyon, says Blair. Currently, engineers at Glen Canyon aren’t looking to install any wide-head turbines like those at Hoover Dam.

If the drought cycles become longer and more severe, hydropower and other power needs will continue to take a backseat to water supply for the southwest region of the United States and California. On the Colorado River, there is a total hydropower generating capacity of 4178 MW, but many of the plants are already operating below their measured capacities because of the drought. Nearly 30 million people depend on the river for drinking water and irrigation. "Producing hydropower is clearly essential to maintaining a secure energy system,” says Schmidt. "But in the grand scheme of things, water only comes from one place, and electricity comes from lots of places.”

If electricity has to come from somewhere else, delivering drinking water to some of the largest cities in the western United States could be particularly problematic. Nearly 30 percent of the energy from Hoover Dam goes to the Metropolitan Water District of Southern California, which provides drinking water to nearly 19 million people across 26 cities and water districts. Less power also means less money for various water quality and environmental studies that inform how the water from the Colorado River should be allocated.

Many experts would like this year to mark the end of the drought cycle, but all they can do now is hope for snow while planning for withering water resources. "I think it’s fair to say everybody involved with the river is hoping this coming winter is the snowiest winter on record,” says Schmidt.

A correction to this article was made on 20 September 2013.



Posted by E.Pomales P.E date 10/05/13 06:03 AM  Category Energy & Petroleum Engineering





Friday, October 4, 2013

GE to Muscle into Fuel Cells with Hybrid System   Print



General Electric is working on an efficient distributed power system that combines proprietary fuel cell technology with its existing gas engines [like the one in the photo].

The company's research organization is developing a novel fuel cell that operates on natural gas, according to Mark Little, the director of GE Global Research and chief technology officer. When combined with an engine generator, the system can convert 70 percent of the fuel to electricity, which is more efficient than the combined cycle natural gas power plants powering the grid.

The fuel cell will generate electricity from reformed natural gas, or gas that's treated with steam and heat to make hydrogen and carbon monoxide, he says. Residual gases from the fuel cell process—a "synthesis gas" that contains carbon monoxide and hydrogen—would then be burned in a piston engine to generate more electricity. The waste gas that comes from the fuel cell needs to be specially treated, but "we know we can burn these things. They’re well within the fuel specs of our current engine,” Little says.
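The 70 percent figure is plausible because the two stages stack: the fuel cell converts part of the fuel's chemical energy directly, and the engine recovers part of what is left in the residual syngas. The stage efficiencies in the sketch below are assumptions chosen only to show how the numbers combine; GE has not published its actual figures.

```python
# Illustration of how a fuel cell plus an engine burning its residual syngas can
# reach roughly 70 percent overall fuel-to-electricity efficiency. The stage
# efficiencies below are assumptions for illustration; GE has not published them.

def hybrid_efficiency(fuel_cell_eff: float, engine_eff: float) -> float:
    """Overall efficiency when the engine burns the energy the fuel cell leaves behind."""
    residual_fraction = 1.0 - fuel_cell_eff  # chemical energy left in the exhaust syngas
    return fuel_cell_eff + residual_fraction * engine_eff

if __name__ == "__main__":
    # Example: a 50%-efficient fuel cell stage followed by a 40%-efficient engine stage.
    print(f"combined efficiency: {hybrid_efficiency(0.50, 0.40):.0%}")  # -> 70%
```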

This distributed power system could provide electricity to a small industrial site or a data center, for example. It would replace diesel generators that are often used to power remote locations or bring electricity to places without a central grid. 

GE sells engines from two companies it acquired, Austria-based Jenbacher and Wisconsin-based Waukesha. It has done its own research on solid oxide fuel cells, and in 2011, it invested in Plug Power, which makes fuel cells for homes and small businesses. But Little indicated that this distributed power system will use new fuel cell technology invented by GE and configured to work in tandem with GE's engines. "We have a real breakthrough in fuel cell technology that we think can enable the system to be distributed and yet work at a very high efficiency level,” he says.

Commercial customers are showing more interest in stationary fuel cells and natural gas generators because they can provide back-up power and potentially lower energy costs. GE's system, which is still a few years away from commercial availability, will be aimed at customers outside of the United States, Little says. Because the United States has relatively cheap natural gas, the combined power generation unit is unlikely to be cost competitive with grid power there. However, the price for natural gas in many other countries is more than double that in the United States, and there the hybrid power generation unit will "compete beautifully,” Little says.

GE's hybrid fuel system is just one of many research efforts the conglomerate has underway to take advantage of unconventional oil and natural gas drilling. Among the projects now being considered at a planned research center in Oklahoma is a way to use liquid carbon dioxide as the fluid to fracture, or frack, wells, rather than a mixture of water and chemicals. The company is developing a hybrid locomotive engine that can run on both diesel and natural gas. And it is working on small-scale liquid natural gas fueling stations that could be placed along railroad lines.

In another effort, GE is developing sensors and software to make oil and gas wells smarter. Researchers are working on different types of photonic sensors that are able to withstand very high heat and pressure. These would be better than electronic sensors for gathering flow and fluid composition data within wells, according to GE researchers.

Image credit: GE


Posted by E.Pomales P.E date 10/04/13 08:09 AM  Category Energy & Petroleum Engineering





Friday, October 4, 2013

World’s Largest Solar Thermal Plant Syncs to the Grid   Print



The Ivanpah Solar Electric Generating System delivered its first kilowatts of power to Pacific Gas and Electric (PG&E) on Tuesday.

The world’s largest solar thermal plant, located in the Mojave Desert, sent energy from its Unit 1 station to PG&E, which provides power to parts of Northern California. When the plant is fully operational later this year, it will produce 377 megawatts. Two of the plant's three units will supply energy to PG&E and the other will deliver power to Southern California Edison.

"Given the magnitude and complexity of Ivanpah, it was very important that we successfully complete this milestone showing all systems were on track," Tom Doyle, president of NRG Solar, one of the plant’s owners, said in a statement.

The massive project spans more than 1400 hectares of public land and will double the amount of commercial solar thermal energy available in the United States. There are other large concentrated solar power (CSP) projects in the Middle East and Spain, but most of the growth in solar in the United States has come from photovoltaic (PV) panel projects, which have come down considerably in price in recent years.

Even with the proliferation of cheap solar PV, there is other value in CSP projects, which use large fields of mirrors to concentrate sunlight on central towers, creating steam to drive turbines. A study earlier this year from the National Renewable Energy Laboratory (NREL) found that a concentrated solar facility would be particularly useful for providing short-term capacity when other operators are offline or as a peaker plant when demand is highest. And steam turbines, unlike intermittent wind and solar PV, offer a steady power supply that operators can turn on or off or fine-tune on demand.

Google, which has invested heavily in renewable energy projects—including $168 million it put into Ivanpah—also sees value in CSP. "At Google we invest in renewable energy projects that have the potential to transform the energy landscape. Ivanpah is one of those projects,” Rick Needham, director of Energy and Sustainability at Google, said in a statement. In addition to generation, Google's investments in wind and solar include a solar financing company and the Atlantic Wind Connection project.

And it just wouldn't be an energy project without some criticism. Ivanpah's creators have been chided for the plant's potential to transform the physical landscape—especially its impact on the desert ecosystem and desert tortoises in particular. But some environmentalists see the risk as an acceptable one if utility-scale solar installations are replacing coal-fired power plants. California has a goal to get 33 percent of its electricity from renewables by 2020.

Photo Credit: Brightsource Energy


Posted by E.Pomales P.E date 10/04/13 08:07 AM  Category Energy & Petroleum Engineering





Friday, October 4, 2013

DOE Maps Path to Huge Cost Savings for Solar   Print



The price of a solar photovoltaic module has dropped dramatically over the last few years. But to get solar installations down toward ideal price points, the cost of making the panels isn't the only thing that needs to come down: so-called "soft costs" represent half or more of the cost of most solar installations. These costs include permitting, labor, inspection, interconnection (if you're going grid-connected, at least), and others, and the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) thinks we can cut those down to size as well.

In a new report, NREL maps out a way to bring soft costs down from $3.32/watt in 2010 for a 5-kilowatt residential system to $0.65/watt in 2020. For small commercial systems below 250 kW, the report suggests a drop from $2.64/watt in 2010 to $0.44/watt in 2020. These soft cost reductions would allow the U.S. to reach the Department of Energy's SunShot Initiative goals of $1.50/watt and $1.25/watt for residential and commercial installations, respectively.

But first, the bad news: if the current trajectory of soft costs continues, those SunShot goals will not be met. Achieving the extra cost reductions necessary to get there won't be trivial, especially for residential installations—in fact, an additional $0.46/watt is needed beyond the current trajectory, a sizable amount when we're gunning for $0.65/watt in total. Financing and customer acquisition costs are most likely to get there without much help, while permitting and interconnection need some help. That help could take the form of streamlined inspection processes and a standardized permitting fee that is substantially lower than what currently exists. The average permitting fee now, though it varies widely across jurisdictions, is $430; NREL suggests bringing that to $250 across the board.

Commercial systems, meanwhile, need only $0.11/watt beyond the current trajectory in order to achieve the SunShot goals. Labor costs may come down more easily than with residential systems; the report suggests that universal adoption of integrated racking, where modules arrive at a site already assembled and ready for installation, is one way to push costs in the right direction.
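Laying the report's numbers side by side makes the remaining gap concrete. The sketch below simply arranges the figures cited above; the implied business-as-usual 2020 soft cost is derived here as the target plus the extra reduction still needed, an inference from those figures rather than a number quoted by NREL.

```python
# Arranging the NREL soft-cost figures cited above. The "business-as-usual"
# 2020 value is derived as (target + extra reduction still needed); it is an
# inference from the cited numbers, not a figure quoted by the report.

SOFT_COSTS = {
    # segment: (2010 soft cost $/W, 2020 target $/W, extra reduction needed $/W)
    "residential (5 kW)":         (3.32, 0.65, 0.46),
    "small commercial (<250 kW)": (2.64, 0.44, 0.11),
}

for segment, (cost_2010, target_2020, extra_needed) in SOFT_COSTS.items():
    business_as_usual_2020 = target_2020 + extra_needed
    print(f"{segment}: {cost_2010:.2f} -> {business_as_usual_2020:.2f} $/W on current trajectory; "
          f"target {target_2020:.2f} $/W leaves a {extra_needed:.2f} $/W gap")
```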

In general, soft costs are increasingly recognized as perhaps the primary barrier to bringing solar prices down into the truly competitive range. And that seems to go for the manufacturing of solar panels as well as for installations: a recent paper in Energy and Environmental Science compared the costs of solar manufacturing in China and the U.S., and found that soft costs, including labor and supply chain, are the biggest differences. If the U.S. wants to keep up with the world's biggest solar manufacturer, working on those costs unrelated to materials is a good place to start. And they better hurry: the cost of building a PV module at major companies in China is going to drop all the way to $0.36/watt by 2017, according to one recent report. With module prices continuing that sort of decline, focusing on the soft side of solar is getting more and more important.

Photo: Tim Boyle/Bloomberg/Getty Images


Posted by E.Pomales P.E date 10/04/13 08:03 AM  Category Energy & Petroleum Engineering





Friday, October 4, 2013

China's New Solar Price   Print



A decade ago, when IEEE Spectrum was preparing a special issue on China's tech revolution, a colleague sitting in an airplane heard somebody behind her exclaim, "But what's the China price?" What he was asking, like everybody in business then and since, was what the Chinese were charging for products in his particular line.

Since Saturday, when the European Union settled a solar trade dispute with China on terms favorable to the People's Republic, we at least seem to know, more or less, what the global floor price for photovoltaics will be in the near future: 56 euro cents per installed watt of photovoltaic cell, or roughly US $0.75/W.
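That dollar figure is just a currency conversion of the agreed floor price; the exchange rate in the snippet below is an assumption roughly matching mid-2013 levels, not part of the settlement.

```python
# Converting the settlement's floor price to dollars. The exchange rate is an
# assumption roughly matching mid-2013 levels, not a figure from the settlement.

FLOOR_PRICE_EUR_PER_W = 0.56
EUR_TO_USD = 1.33  # assumed EUR/USD exchange rate, mid-2013

print(f"floor price: about ${FLOOR_PRICE_EUR_PER_W * EUR_TO_USD:.2f} per installed watt")
```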

The EU settlement of a trade complaint brought by European PV manufacturers led by Germany's SolarWorld does not impose sanctions or tariffs on China. It does not satisfy the complainants and is seen as weak—typical of Europe's failures to advance its global interests with sufficient resolve. But that's geopolitics, which is a story for another day.

What's of interest here is the settlement's setting of a solar price that's well below the one-dollar-per-watt mark, often considered the breakeven point for PV market competitiveness. It will "allow Chinese companies to export to the EU up to 7 gigawatts per year of solar products without paying duties, provided that the price is no less than 56 cents per watt," as the Financial Times put it in its report. That is, Chinese producers will be permitted to collectively export 7 GW of solar cells to Europe each year—an amount equal to more than half of Europe's solar market—without incurring trade penalties. ("A trade deal with the European Union gives China 60% of the EU's solar-panel market," concludes a video interview on the Wall Street Journal site.)

The 7-GW ceiling on Chinese PV exports to EU states is essentially voluntary: Any exporters exceeding that limit will pay tariffs averaging 47.6 percent, as of August 6. That would seem to almost guarantee that Chinese exporters will not sell to Europeans at a price below 56 euro cents per watt. And, as Europe represents such a large fraction of the global solar market, the global PV floor price will be approximately the same.

In the short run, however, the effect of the European settlement may be that the Chinese will dump PV cells in the U.S. market at an even lower price. That is the opinion of Keith Bradsher, China correspondent for the New York Times, who previously did outstanding reporting on the crisis in the U.S. auto industry and the festering troubles of Detroit.

Photo: William Hong/Reuters


Posted by E.Pomales P.E date 10/04/13 08:01 AM  Category Energy & Petroleum Engineering





Friday, October 4, 2013

Counting the Sins of China's Synthetic Gas   Print



Heavy water use, threats of tainted groundwater, and artificial earthquakes are but a sampling of the environmental side effects that have tarnished North America's recent boom in natural gas production via hydraulic fracturing or fracking. No surprise then that in European countries such as the U.K. that are looking to frack for cheap domestic gas, the environmental protesters often arrive ahead of the drill rigs.

But countries seeking fresh gas supplies could do far worse than fracking. So say Duke University researchers who, in today's issue of the research journal Nature Climate Change, shine a jaundiced spotlight on China's plans to synthesize natural gas from coal. Nine synthetic gas plants recently approved by Beijing would increase the annual demand for water in the country's arid northern regions by over 180 million metric tons, the Duke team concluded, while emissions of carbon dioxide would entirely wipe out the climate-cooling impact of China's massive wind and solar power installations.

"At a minimum, Chinese policymakers should delay implementing their synthetic natural gas plan to avoid a potentially costly and environmentally damaging outcome," says Chi-Jen Yang, a research scientist at Duke's Center on Global Change and the study's lead author, in a statement issued yesterday.

Synthetic gas plants use extreme heat and pressure to gasify coal, producing a combination of carbon monoxide and hydrogen. Steam and catalysts are then added to convert those gases to methane to produce a pipeline-ready substitute for natural gas.
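In outline, the chemistry is a two-step affair. The simplified reactions below treat the coal as pure carbon and include the water-gas shift for completeness; they are a sketch of the process, not the exact chemistry of any particular plant.

```latex
% Simplified synthetic-gas chemistry (coal idealized as carbon); requires amsmath.
\begin{align*}
\text{gasification:}    \quad & \mathrm{C + H_2O \longrightarrow CO + H_2} \\
\text{water-gas shift:} \quad & \mathrm{CO + H_2O \longrightarrow CO_2 + H_2} \\
\text{methanation:}     \quad & \mathrm{CO + 3\,H_2 \longrightarrow CH_4 + H_2O}
\end{align*}
```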

It takes a whole lot of steam: According to Duke's estimates, China's synthetic gas plants will consume up to 100 times as much water (per cubic meter of gas) as shale gas production through fracking.

Relative greenhouse impact is harder to pinpoint because fracking's climate footprint remains controversial. Recent U.S. Environmental Protection Agency and industry studies dispute earlier results suggesting that fracked wells leak more methane—a potent greenhouse gas—than conventional wells.

What is certain, say Yang and his colleagues, is that synthetic gas production will be carbon intensive relative to conventional gas. Burning conventional natural gas to produce power releases only about a third to a half as much carbon as burning coal, but power generated from synthetic gas will be 36 to 82 percent dirtier than power from coal-fired plants.

Capturing and storing CO2 emissions could slash the climate costs, and China may have the technology to do it. Last year, Chinese power firm Huaneng started up the world's most advanced coal gasification power plant, which sports equipment to efficiently extract carbon waste from gasified coal. Similar technology could potentially enable China's synthetic gas plants to capture and sequester their CO2 instead of sending it up the stack.

Of course adding such equipment adds to construction and operating costs. Duke's team clearly doubts that Beijing will make synthetic gas producers go there.

Photo: David Gray / Reuters


Posted by E.Pomales P.E date 10/04/13 08:00 AM  Category General





Friday, October 4, 2013

Production of Solar Panels Outpaced Investments Last Year   Print



Worldwide photovoltaic  (PV) solar panel production rose 10 percent in 2012 despite a 9 percent drop in investment, reports the European Commission (pdf). The numbers are imprecise, because solar panel makers use different types of production and sales figures, but the Commission authors estimate that producers added between 35 GW and 42 GW of PV capacity in 2012. The growth follows several years in which European governments have trimmed subsidies to solar power, prompting many private investors to shy away from the sector and driving some companies to bankruptcy.

Something about solar is special, though: investment in PV capacity still made up over half (57.7 percent) of new renewable energy investments, for a total of $137.7 billion, and analysts predict further growth through 2015. Part of the reason for investment's lag behind production is that producers added so much production capacity during the pre-recession subsidy boom that they need less capital investment to sustain high production levels. Making the hardware isn't the hard part.

Indeed, a recent Energy and Environmental Science study found that "soft" costs such as supply-chain efficiencies and regulatory barriers made up more of the difference in production costs between regions than hardware production costs did. The authors predicted that the right business management and regulatory boosts could enable U.S. manufacturers to match China's costs. The EC report also shows optimism for PV in the United States: it figures U.S. PV capacity grew from 3.4 GW to 7.7 GW in 2012, almost doubling in response to a mix of legislative mandates and tax credits.

Courtesy European Commission Joint Research Center

Most of the rest of the growth comes from Asia, where governments are still in the first flush of support for solar energy. The EC report expects new guaranteed prices for solar power there, much like the prices which drove Europe's own solar boom in the mid-2000s. In Australia, about 10 percent of homes already have PV systems.

That doesn't mean the sun is setting on solar in Europe, though. After a pilot run in near-sunless London, Ikea announced that it would offer PV panels at all of its United Kingdom stores. The firm figures consumers can earn £770 ($1247) a year between subsidies and savings on conventional electricity bills. Upfront costs are at least £5700, but typical panels last decades and should amortize installation costs in a little over 7 years. That should make up for some of the UK's gray days.
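The payback figure follows directly from the numbers quoted. A minimal check, taking the upfront cost and annual benefit from the article and ignoring panel degradation, financing, and discounting as simplifying assumptions:

```python
# Simple payback check for the Ikea UK offer cited above. Ignores panel
# degradation, financing costs, and discounting -- simplifying assumptions.

UPFRONT_COST_GBP = 5700.0    # minimum installed cost quoted in the article
ANNUAL_BENEFIT_GBP = 770.0   # subsidies plus bill savings quoted in the article

payback_years = UPFRONT_COST_GBP / ANNUAL_BENEFIT_GBP
print(f"simple payback: about {payback_years:.1f} years")  # ~7.4 years
```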


Posted by E.Pomales P.E date 10/04/13 07:57 AM  Category Energy & Petroleum Engineering





Friday, October 4, 2013

Remembering Ray Dolby   Print



Ray Dolby, founder of Dolby Laboratories, died yesterday at age 80. He’s widely hailed as a pioneer in audio technology who revolutionized movie theater sound and took the hiss out of audiotape. Perhaps less well known are his contributions to video recording.

I was fortunate to spend several hours interviewing the brilliant and quiet but charming Dolby 25 years ago, at Dolby Laboratories in San Francisco. We talked about how his career in audio and video all got started. Here’s what I wrote then:

From "Ray Dolby: A Life in the Lab,” originally published in IEEE Spectrum, 25th Anniversary issue, 1988:

In 1949, Alex Poniatoff, founder of Ampex Corp., was sent a 16-year-old Redwood City, Calif., high-school student named Ray M. Dolby to run the movie projector for a meeting. Afterwards, Poniatoff demonstrated the company's newest audiotape recorder and asked the youth if he'd like to work for him during weekends and vacations.

Dolby had been tinkering with electronics since age 9—when he rigged up a Morse-code-type signaling system at his grandfather's farm—and he jumped at the chance. That started a career that has made his name a synonym for high-fidelity sound.

At Ampex, he helped develop a geophysical recorder, and after entering San Jose State College in California as an electrical engineering major in 1951, he continued to work at Ampex during his free time. Then, just before his sophomore year, Ampex started to develop a videotape recorder.

Dropping out of college, Dolby knew, would almost certainly mean a military draft during the Korean War, but "I had some ideas about how videotape recording should be done and I wanted to work on it," he recalls, so he took the chance. He was indeed drafted, on April 1, 1953, but "I had several months working on the videotape recorder, and we were able to lay down many of the basic parameters of the system, so it was worthwhile," he says.

It happened that Ampex was low on funds and put the videotape project on the shelf until late 1954. On Jan. 1, 1955, Dolby was out of the Army and back with the company. Concurrently with his work on the videorecorder, Dolby finished his BSEE degree at Stanford University in California. The recorder went into production late in 1956, and less than a year later Dolby went to Cambridge, England, where he earned a Ph.D. in physics, specializing in long-wavelength X-rays. His hobby of recording local musical groups with his Ampex 600 stereophonic tape recorder put him in high demand, because, he says, it was the only professional-quality recorder in Cambridge.

In 1965, after Dolby had spent two years with the United Nations Educational, Scientific and Cultural Organization in the Indian Punjab, he started Dolby Laboratories in the corner of a London dressmaking factory. He had $25,000 he had saved and borrowed, and four employees. The first products, which he marketed as "S/N (signal-to-noise) Stretchers," quickly came to be known as "Dolbys," much to his surprise—he does not mind, however, and says having his name recognized is sort of fun.

Today, Dolby Laboratories Inc. has some 350 employees with laboratories and production facilities in London and San Francisco. Just about every manufacturer of audio cassette recorders licenses Dolby noise reduction technology.

Dolby now goes to the office only once or twice a week, usually to pick up components from the storeroom. He prefers to work in the laboratory on the top floor of his San Francisco home. At 55, he has what he has always wanted: "A lab," he says, "in which I can mess around with my own ideas"—ideas that so far have netted him more than 50 patents.

In the 25 years since I wrote that profile, Dolby Laboratories became a public company, making Dolby a billionaire, and today has about 1500 employees. Besides those 50 patents, Dolby collected several Emmys, two Oscars, and a Grammy. He received the IEEE Edison Medal in 2010 "For leadership and pioneering applications in audio recording and playback equipment for both professional and consumer electronics.”  For more on Dolby’s contributions to video recording, see "First-Hand: My Ten Years at Ampex and the Development of the Video Recorder”, an oral history by Fred Post at IEEE’s Global History Network.

Photo credit: Dolby Laboratories


Posted by E.Pomales P.E date 10/04/13 07:53 AM  Category Engineers





Friday, October 4, 2013

A Video Tour of Fukushima Daiichi   Print


http://i3.ytimg.com/vi/sYKKnJmkm7o/default.jpg

Japan's crippled Fukushima Daiichi nuclear power plant has been back in the news in recent weeks, as radioactive water has leaked from tanks and contaminated groundwater has seeped through the soil toward the sea. TEPCO, the utility that owns the plant, has just released a video to explain this bad news, and to combat widespread rumors and misinformation.

As I learned while discussing the Fukushima situation on a public radio talk show, KQED's Forum, there's a huge amount of paranoia regarding the recent water leaks. The listeners of that San Francisco-based show called in to ask whether they could eat Pacific fish, and did not seem reassured when I and the other guests explained that high levels of radiation have only been found in bottom-feeding fish living near the coast of Fukushima prefecture.

I did my best to make clear that the Fukushima nuclear disaster poses little threat to San Francisco grocery shoppers, but that it is still inflicting profound hardships on the residents of Fukushima prefecture. More than 100,000 residents had to flee their homes in the first days of the accident, and the towns near the plant are still too contaminated to be habitable.

The 20-minute video above begins by explaining the nuclear accident of March 2011, then goes on to discuss the decommissioning plan for the plant (which is expected to take 40 years in total), as well as TEPCO's attempts to improve its water storage and decontamination systems. 



Posted by E.Pomales P.E date 10/04/13 07:49 AM  Category Industrial Engineering





Friday, October 4, 2013

Google's 'AdID' Aims to Replace Cookies for Tracking Web Users   Print



Google has been quietly baking a plan to replace the Internet "cookies" used to track Internet browsing activities with its own anonymous identification system. Google's proprietary identifier, called "AdID," could shift the balance of power in the US $120 billion digital advertising industry and shake up the debate over online privacy.

USA Today caught the scent of Google's plan through an anonymous source who described how AdID might work. Google's AdID would be offered to advertisers as an alternative to today's cookies that act as tiny digital trackers following the online activity of Web users. Like cookies, AdID would help companies and advertisers gather information about consumer interests and behaviors—a way to target people with online ads tailored to their personal tastes.

One huge difference would be that Google itself would gain much greater control over the information-gathering process that serves as the bedrock for digital advertising. The Internet giant already dominates online advertising, accounting for one-third of worldwide online ad revenue. Add to that the fact that its Chrome Internet browser has become the most popular in the world.

Many advertisers and advertising industry groups interviewed by USA Today seemed nervous about placing even greater control of digital advertising's fate in the hands of a few technology giants such as Google. Mike Zaneis, general counsel for the Interactive Advertising Bureau, worried about changes in AdID suddenly jeopardizing billions of dollars in digital ad spending.

Still, advertisers who agree to Google's terms for AdID might benefit from the use of a single identifier that can create more personalized profiles of Web users, said Mike Anderson, chief technology officer of Tealium, a company that specializes in helping digital marketers better manage online tags, in a Wall Street Journal interview. The biggest benefit to advertisers will be eliminating the confusion that arises from the use of different third-party cookies; right now, advertisers can't tell if they're all tracking the same user or not.
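The fragmentation problem Anderson describes is easy to picture: each ad network sets its own third-party cookie, so the same person shows up under several unrelated identifiers. The sketch below is a hypothetical model of that situation and of how a single browser-issued identifier would collapse it; Google has published no technical details, so none of this reflects the actual AdID design.

```python
# Hypothetical illustration of the ID-fragmentation problem described above.
# None of this reflects Google's actual (unpublished) AdID design.

import uuid

# Today: each ad network sets its own third-party cookie for the same visitor,
# so the networks cannot tell they are tracking one person.
third_party_cookies = {network: str(uuid.uuid4())
                       for network in ("network_a", "network_b", "network_c")}

# With a single browser-issued identifier, every approved network that the user
# has not excluded would see the same (resettable) ID.
browser_ad_id = str(uuid.uuid4())
unified_view = {network: browser_ad_id for network in third_party_cookies}

print("fragmented IDs:", third_party_cookies)
print("unified ID:    ", unified_view)
```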

Cookies have already stirred up controversy over how much information they allow companies and advertisers to collect about Web users. Microsoft's latest version of the Internet Explorer browser offers users a "Do Not Track" feature, and Mozilla has made blocking third-party cookies the default option in its Firefox browser since earlier this year. Apple has blocked third-party cookies on its Safari browser from the start, but the company introduced its own ad identifiers for the iOS mobile platform used on its popular iPhones.

Google's AdID plan would supposedly allow users to adjust their browser settings to limit ad tracking and even exclude companies on Google's list of approved advertisers, according to USA Today's anonymous source. Users would also have the choice to create a secondary AdID for "private" online browsing sessions—likely a reference to browser features such as Google's "Incognito" mode that disables cookies.

Giving consumers greater control over ad tracking sounds good in theory. But Jonathan Mayer, a Stanford University professor who studies online advertising and privacy, told the Wall Street Journal it was "unclear" whether Google's anonymous identifier would actually improve online privacy. And it's always worth keeping in mind that Google's wildly successful business model has been built upon gathering information, rather than limiting data collection.


Posted by E.Pomales P.E date 10/04/13 07:46 AM  Category Software Engineering





Friday, October 4, 2013

The Race To Get Your Hands Off The Wheel   Print



A fleet of cars and drivers whisks visiting journalists around the Frankfurt Motor Show's sprawling, 144-hectare site. Judging by the number of exhibits of self-driving car technology this year, future visitors can expect their courtesy cars to lack drivers. It's a matter of putting together many existing technologies in an affordable, safe system.

One piece of that future system nearly clobbered a two-dimensional cutout of a child last week on a fenced-off piece of asphalt outside Hall 10. There, Bosch employees led by Werner Uhler were demonstrating a stereo optical camera system Uhler says could be cheaper than combined radar and optical systems used for collision avoidance today. The device is mounted on the front window of a testbed car, adjacent to the rear-view mirror. As the testbed approached a parked car, Uhler, seated in the backseat, said, "We will drive along...and suddenly a child will turn up and we will brake."

That was true.

The colorful cutout of a child burst into harm's way from behind the parked car, as promised. The testbed car, moving at 35 kilometers per hour per New Car Assessment Program (NCAP) guidelines, braked to a full stop within a few feet of the cardboard child. The NCAP has reported on commercial so-called Emergency Autonomous Braking (EAB) since 2010. In the real world, cars spend a lot of time driving faster than 35 kph, and EAB's role is more about damage control than damage avoidance. But sending the cutout child flying, even if not as far as a human-driven car would have sent it, might undermine the clear-cut message Bosch—and Daimler, which unveiled a very autonomous car at the Frankfurt show—and other manufacturers are sending: that they will soon drive your car better than you can. Put another way: future driving software, such as that announced by Audi, won't get bored or distracted in stop-and-go traffic.
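Part of why EAB shifts from avoidance to damage control at higher speeds is simple physics: braking distance grows with the square of speed. The sketch below illustrates that scaling; the deceleration and system latency are assumptions for illustration, not figures from Bosch's demonstration.

```python
# Why autonomous emergency braking mitigates rather than prevents crashes at
# higher speeds: braking distance grows with the square of speed. The latency
# and deceleration below are assumptions, not figures from Bosch's demo.

DECELERATION = 9.0     # assumed full-braking deceleration, m/s^2
SYSTEM_LATENCY = 0.3   # assumed detection-plus-actuation delay, s

def stopping_distance_m(speed_kph: float) -> float:
    """Distance travelled from detection to standstill, in meters."""
    v = speed_kph / 3.6                              # convert to m/s
    return v * SYSTEM_LATENCY + v * v / (2.0 * DECELERATION)

for speed in (35, 50, 70, 100):
    print(f"{speed:>3} km/h -> {stopping_distance_m(speed):5.1f} m to stop")
```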

It's not just luxury cars, either: Volvo showed off its latest moose-avoidance technology in Frankfurt, and Ford already offers Focus drivers a driver assistance package. Nissan and a chorus of other car makers have declared that they expect autonomous cars to reach commercial viability by 2020.

Autonomous braking is one of a slew of new technologies leading toward what manufacturers call "assisted driving," and "highly automated driving"—or more bluntly, "self-driving cars." Cars have assisted their drivers since the commercial advent of cruise control in 1958. But in the last few years, cars entering the market have begun to alert drivers to impending parking accidents, maintain a safe distance from cars ahead of them, and stay in an assigned lane. Where they show their limits, says Michael Fausten, the director of Bosch's autonomous driving team, are unanticipated combinations of risky situations. Solving those will require heavy mathematical lifting, he says.

Next door to the Bosch demonstration was a Volkswagen self-parking car, which a frazzled driver might want to buy after, say, a near-miss with an errant child. But on a drizzly day, the car's handlers told me to come back when it was sunny. So I flagged down a press car, got inside, and asked the chauffeur whether all the self-driving car technology on display made him worry about his job security. No, he said, without elaborating.