id | text | industry_type
---|---|---|
2016-40/4016/en_head.json.gz/5204 | Advanced and Alternative Energy
Layoff leads to new career at leading-edge nanotech firm
Dave Malaska |
Matt Garver of Zyvex in Columbus, OH. Photos Ben French
Advanced Materials, Columbus/Central Ohio, Workforce Transition
Columbus - Like many workers in this unstable economy, the year didn't start off well for Matt Garver. But thanks to his expertise, a burgeoning technology and a helping hand from the state, his prospects for 2009 brightened. So did his career.
Up until January, Garver's life had been going to plan. At 27, he'd earned his chemistry degree from Bowling Green State University, landed a job with an established firm in his hometown and had settled into lab life with Ashland Chemical in Dublin. But by the end of 2008, with layoffs rumored at the giant chemical company, he had begun feeling nervous about his future. That feeling was even more palpable when 2009 rolled around and Ashland announced in January that it would be forced to lay off hundreds of employees, including Garver.
Just that quickly, Garver wondered if he would ever work in his field again. "I had a little money put aside, so I thought maybe I'd travel for a while. During college, I made some money tiling floors and painting, so I thought I could do that for a while if I had to," he says. His "Plan B" was short-lived. Three days into his unemployment, he was preparing to hit the road when he got a call from a much smaller firm than the goliath Ashland, but one that staked its reputation on the cutting edge of exciting new technologies.
Zyvex Performance Materials, based near the technical heart of Ohio State's west campus, was interested in Garver's skills as it looked to build its standing in an exciting new field of nanotechnology. Zyvex, which moved to Columbus from Texas in 2007, is the country's leading firm in the development of carbon nanotubes -- structures far thinner than a human hair that offer amazing strength combined with minuscule weight, making them essential to growth in fields ranging from electronics and optics to architecture and aeronautics. Riding the wave of technology's advances, Zyvex has put carbon nanotubes to work in everything from baseball bats to ship hulls and aircraft frames, making structures that are lighter but stronger, thinner but tougher than traditional materials.
The firm was originally part of the Texas-based Zyvex Inc., a much larger company with materials, instrument and laboratory divisions. Recognizing a divergence of purposes, the divisions split into three separate entities in 2007. The materials group relocated to its current Kinnear Road headquarters later that year with help from a $1-million grant through the Ohio Third Frontier, a $1.6 billion, 10-year state effort to establish Ohio as an innovation leader. Among the Third Frontier's purposes is helping incubate research efforts designed to accelerate the pace of high-tech commercialization and job growth within Ohio. In Zyvex's case, it was a large factor in deciding to call Columbus home.
"At the time, Ohio was just becoming the focal point of what we call Polymer USA -- the equivalent of what Silicon Valley is to semiconductors," explains company President Lance Criscuolo. "Most all of the polymer companies are in Ohio, or within a short radius of the area, so Columbus put us in the center of the right ecosystem."
The move helped make Ohio a worldwide competitor against better-funded efforts, such as the multi-billion dollar "Composite Park" now being built in eastern France -- with dozens of companies and university researchers looking into nanotechnology -- funded almost completely by French federal money. But the move also gave Zyvex a pipeline into a deep talent pool of scientists, like Garver, who had been working in similar fields with other companies such as Ashland, Hexion Specialty Chemicals and Owens-Corning, Criscuolo says.
With only 20 employees on site by the start of 2009, Zyvex was looking to expand its work. It dipped into that talent pool in February to hire Garver on a temporary service-contract basis. But with another $4.9-million grant from Third Frontier in March, it was able to bring the former Ashland chemist on full-time. "We started him with a service contract, but we quickly realized that he had the right skill set to make an impact with us as a technician, working with our materials and understanding our technology," says Criscuolo. Thanks in part to the latest grant, Criscuolo adds, further expansion is in the works, with plans to increase Zyvex's staff to 55 technicians within 18 months.
For Garver, it's meant not only full-time employment but also an entry into an exciting field in its nascent stages. With Zyvex, he's working on research and development of new materials, rather than just trying to find new uses for long-established materials. To the layman, Garver explains that he's currently working on monomers -- the building blocks of polymers -- to ascertain how they react within chemical structures, then incorporating them into Zyvex's huge range of products. "It's everything I thought it could be and more," he beams.
He's also found a home. "It's an exciting time. Nanotechnology is fairly new, and it's really starting to blow up," Garver says. Zyvex's latest effort is a product called Avorex, which Criscuolo calls the firm's current "first-round draft pick." The second generation of its Avorex carbon fiber-and-glass composite expands the strength-versus-weight ratio, and holds promise in conductivity as well, Criscuolo says, making it ideal to replace electrical conducting components in aircraft like Boeing's planned 787 super-efficient, long-range jets. Meanwhile, the pace of new discoveries at Zyvex and within the industry is accelerating, with new technology finding its way into products with unprecedented speed. The excitement of seeing groundbreaking work go from design phase to application is one reason why Garver finds it very easy to see himself spending his entire career with Zyvex.
"Other places have more a corporate atmosphere," he explains, "but here we're like a small family. We work together really well, and it's more fun. It's probably why we're one of the leaders in the field." Give us your email and we will give you our bi-weekly online magazine. Fair?
| 科技 |
2016-40/4016/en_head.json.gz/5260 | Culture Network: New Media - It's a jungle over at Amazon
Talented people in new media now have so much clout they pose a threat to their fledgling industry
Amy Vickers
Sunday 12 September 1999 23:02 BST
The unexpected departure last week of Simon Murdoch, the head of Amazon Europe, will not only have an impact on the ever more competitive online book market, it will also have a significant effect on the health of the new media industry in the UK. Home-grown talent is a rare commodity and, given the current size of the new media industry, there is hardly enough of this to go round. Even an ounce of experience in new media is now much sought after. Once someone picks up enough experience in one particular area, he or she is either poached by another company or they realise that they could reap some of the benefits by starting their own business and poaching the best talent for their start-up.
It's a vicious circle. Once a person realises that only a small number of people are actually capable of doing their job, the bargaining power and clout can often become very destructive for the industry. The danger, of course, is that when such an invaluable person leaves, the inevitable questions about the future of the company are asked. Granted, this is true for most industries but, for a sector which is barely out of its infancy, yet another key departure to a start-up elsewhere is rough news. Why? Because it further weakens the pool of available talent.
With Amazon UK preparing to extend its product range to mirror that of its US site and battle against fierce competition from Bertelsmann Online, Murdoch's resignation can only have hit the company hard.
So what's Murdoch going to do? Take a break, find a market gap and then start his own business? Yup. Sources suggest that Murdoch already has plenty of funding for his next venture, given that he has recently relieved himself of the £2m of Amazon stock which he gained when he sold Bookpages (which he set up in December 1996) to Amazon at the end of last year.
Amazon is hoping to lessen the blow by the appointment of caretaker manager Colleen Byrum, Amazon's head of customer services in the US, until it manages to find a "suitable replacement". Faced with the recruitment crisis and the unappealing Slough location, Byrum could end up running Amazon UK for quite a while. Part of the problem at Amazon UK, as with most UK extensions of US operations, is that the American influence in the company makes for difficult working conditions when dealing with a different local market.
Yankee invasion
A growing number of UK-based companies are now run by Americans lured by the charms of working in "cool" London, and the chance to succeed in what is, comparatively, still a burgeoning market. The old adage that the US is a year or so ahead of the UK, and so the American talent pool is much bigger, has a lot to do with this invasion, although the recent abundance of European venture capital, big salaries and company equity are also huge attractions.
News has it that BT, that bastion of all things corporate about the Internet, is looking to change the way it is perceived by young Net users and is drafting in young media managers from various fields. One of the first appointments was... guess? An American with years of experience of Internet marketing, of putting content on the web and in e-commerce. Taking the reins as head of content and advertising, John Racza faces the task of making BT appeal to young Internet users through a funky advertising campaign and by aping the formula of music, sport and film on the content side.
Clearly, there is a market for these types of content sites, but it is a very crowded one, driven mainly by fan-based loyalty to a certain magazine or club. BT is aware that it needs to do something innovative and clever and, of course, it has the available cash. But whether it will create fantastic sites that blow the competition away has to be seen. Perhaps it should give something away for free. Internet calls might be a start, but as long as BT's main strategy is to make as much money as possible from Internet calls, that seems as likely as teaching a border collie to play backgammon.
Internet riches
A curious trend has emerged over the past few months in medialand, particularly at Murdoch-owned companies. Key personnel from traditional media, Sky's Mark Booth and The Times's Toby Constantine, for example, have defected to new media. OK, they may just fancy a change, but many suspect the attraction is the price of Internet stocks. Internet IPOs are big news at the moment. The interest generated by Freeserve's flotation in the national press has been interesting to see, and stories that the Freeserve chiefs have become paper millionaires are attention grabbers. In the last week both QXL and Agency.com have confirmed their IPO plans and the anticipation of over-inflated share prices has been stirred again. QXL's IPO is expected to be the second-biggest Internet flotation in the UK, behind Freeserve.
With Freeserve now worth a fluctuating £1.5bn, it will also be interesting to see how much value QXL's flotation can add to the two-year-old, loss-making online auction house. Analysts have tipped it to float at around £400m, which would see its founder, former journalist Tim Jackson, being added to the growing list of Internet millionaires. But while QXL undoubtedly has thousands of companies and people trading via its site, it is baffling that it should be worth so much when its 1998 revenues were only £2.5m.
Compare this to the $75m (£46m) IPO of global interactive giant Agency.com, which took revenues of $70m (£43.2m) in 1998 and is far more established with a 750-strong global workforce. It seems the only way to make a killing these days is to develop an intangible online brand and spend lots of money on offline advertising prior to flotation. I'd better finalise that business plan.
[email protected]
| 科技 |
2016-40/4016/en_head.json.gz/5275 | IBM buys Emptoris for contract management, supply software
The move adds to IBM's family of commerce-related applications
Supply Chain Management (SCM)
IBM has signed a deal to buy supply and contract management software vendor Emptoris in another bid to fill out its growing catalog of business-to-business and business-to-consumer commerce technologies, the company announced Thursday. Terms of the deal, which is scheduled to close in the first quarter of next year, were not provided.
The move closely follows IBM's $440 million purchase last week of DemandTec, maker of analytics software that retailers use to fine-tune their product offerings and pricing strategies.
Emptoris has about 725 employees and 350 customers, including ADP, Kraft and American Express. In recent years, the company suffered a $7 million judgment against it in connection with a patent case filed by its competitor, Ariba.
IBM's move to buy Emptoris comes shortly after the launch of a new version of the smaller company's product suite, which it dubbed a "strategic supply management platform for the future."
Features include an overhauled user experience, including support for many browsers, the iPad and integration with Microsoft Office; a program management module; a global repository for data regarding suppliers; and BI (business intelligence) functionality based on SAP's Business Objects software. It's not clear whether IBM will look to swap out the last feature with its own Cognos BI platform.
During the early and mid-2000s, Emptoris set out to be the top suite vendor for strategic sourcing, and largely succeeded, said Jason Busch, managing director of advisory firm Azul Partners and editor of the Spend Matters blog.
"It was one of the best, if not the best, at that time," said Busch. "They did a great job of convincing the market they had a better mousetrap." Busch also competed against Emptoris years ago while working for FreeMarkets, a company acquired by Ariba in 2004.
Ariba's patent case came at a "horrible" juncture for Emptoris, he added. "They were unable to raise the high-valuation funding rounds they were able to before," due to the uncertainty, he said. Marlin Equity Partners took a majority stake in Emptoris in 2009.
Emptoris has since gotten back on track, Busch said. "It's one of the stronger products in the sourcing market today."
IBM's announcement drew a cool reception from Tim Minahan, chief marketing officer for Ariba. "It validates the strategy we've been pursuing," he said in an interview. "We continue to compete against and have beat Emptoris quite handily in the past. This is a change in business cards for them, not much else. There's still a lot of runway for IBM to travel. Acquiring a sourcing company isn't going to do it. We have the world's largest web-based trading network."
However, IBM may really be intent on using Emptoris to compete more effectively in the procurement BPO (business process outsourcing) market, Busch said. "This certainly puts pressure on Accenture in this market, and the other significant BPOs as well, who may or may not own significant software assets."
Meanwhile, sourcing software providers, both pure-play companies and ERP (enterprise resource planning) vendors such as Oracle and SAP, may have less to worry about.
"This presents a huge opportunity for the best-of-breed as well as the ERPs," he said. "Typically when IBM acquires software the rate of innovation is not what it was before." | 科技 |
2016-40/4016/en_head.json.gz/5341 | Podcasts: Podcast: NASA's Aquarius Mission to Fly High Over the Salty Seas June 2, 2011
Music open.
Narrator: Flyin' high above the salty seas. I'm Jane Platt with NASA's Jet Propulsion Laboratory in Pasadena, Calif. The Aquarius mission team is prepping for launch of an instrument that will orbit Earth and measure the salt content of the ocean surface. And what will that tell us? Let's ask Amit Sen, the Project Manager for Aquarius at JPL. Hi, Amit.
Sen: Hi, Jane. It's an exciting event that's coming up, and we have been so excited about this mission because it's all about Earth, the planet that we live on and the planet that we have to maintain and understand. And salt is a key factor that affects deep ocean currents, which transport heat throughout the world's oceans, and that in turn affects our climate. And that's why it's so important internationally for people to find out.
Narrator: And that is involved with such climate patterns as El Nino and La Nina that we're pretty familiar with here on the West Coast.
Sen: That's exactly right, and El Nino and La Nina are pronounced effects that we see lately. And there are some effects that we do not know about. We really do not know or understand the ocean as much as we think we do. And so this exploratory mission is going to only gain us more insight, more knowledge, into the ocean process, the water cycle and the global circulation.
Narrator: And the more we know about our oceans, as you say, the more we understand our climate and climate patterns.
Sen: Exactly. Since the climate patterns are linked to precipitation, and evaporation, and those are linked to the water cycle, that's what we want to understand. As you know, the sun heats the ocean at all times, and the atmosphere touches the top of the ocean all the time, so the evaporation happens from the surface of the ocean. It goes up and it raises, and the trade winds, the winds carry it over land, and the precipitation happens. And then the precipitation goes through runoffs and back into the ocean. The water cycle.
Narrator: And this is the first time that NASA has a mission to study salinity.
Sen: Absolutely. This salinity measuring mission is the first time for NASA. NASA has measured, over the last 25 years, temperature, winds over water, color of the ocean, but they have not measured one parameter that has largely been missing--the salt content of the ocean and the transport factor. The missing link.
Narrator: And this is an international venture.....
Sen: It is an international venture, and along with the U.S., there are five other international partners. Argentina, the Argentine space agency, the Brazilians, they have a Brazilian space agency, the Canadian space agency, the Italian space agency, and the French space agency. All of them, together with us -- six international groups, participate in this mission, bringing in various instruments. NASA brings in the prime instrument, which measures sea surface salinity. The rest of the observations are from different countries maintaining also, from their country's point of interest, their natural and hazardous information that they want to see from space.
Narrator: And just in really layperson terms, the instrument measures salinity in the ocean...how?
Sen: OK, the instrument is basically a radiometer, what it does is for a tuned frequency, we are looking at a microwave emission coming out from the ocean. It's almost like tuning to a radio station for a certain frequency. So we know the ocean reflects back into various radiations. So one of the radiation areas that we look at is a microwave radiation. And this microwave radiation is detected 408 miles above the Earth, looking for a pinch of salt in the ocean. If you took a pinch of salt and put it in a gallon of water, we could detect that sensitivity from 408 miles above the Earth. And that's quite a feat by itself.
Narrator: That's amazing. Anything else you want people to know as you're getting ready for launch about the mission, about the preparation?
Sen: Well I'm sure everybody is going to be keenly looking at all the websites and the launch preparation, I would like them to look at the website and keep yourself more and more aware of what Earth does and how we understand the Earth.
Narrator: Right, and that website is the Aquarius mission site, online at www.nasa.gov/aquarius. And we also have another fun way for people to keep track of the mission. Tell everybody about the free iPhone app that people can get at the App store.
Sen: There's also a free iPhone app, as you mentioned. If you go to the App store on Apple. But this only works on the Apple iPhone, the iPad, and whatever I's. And so you go to the Apple Store. And if you do a search of JPL and Aquarius together, and it will come up with the icon and you can download it for free. It absolutely takes no money to download the app. And you can not only see in your palm of your hand how the salinity data, that we think, would look like, and you can interact with it. And you also can see the countdown clock to the time for launch, and you will be able to also learn about it, various aspects of what I spoke about, and also pictures and videos of the recent activities at the space centers.
Narrator: Alright, well thank you very much, I know you're extremely busy getting ready for launch, so I appreciate your time, and best of luck to you and the team.
Sen: Thank you very much, Jane.
Narrator: And just a quick followup on that Aquarius iPhone app -- it's also available at www.jpl.nasa.gov/apps. Thanks for joining us for this podcast from NASA's Jet Propulsion Laboratory.
Music close.
| 科技 |
2016-40/4016/en_head.json.gz/5366 | Juneau Icefield Research Program founder, long-time director passes away
By Matt Miller, KTOO | January 29, 2014 | Featured News, Federal Government, Government, Military, Science & Tech
Dr. Maynard Miller, founder and long-time director of Juneau Icefield Research Program. KTOO photo.
A researcher who pioneered work on Southeast Alaska’s glaciers nearly 70 years ago has passed away.
Dr. Maynard Miller died January 26th at his home in Moscow, Idaho. He was 93 years old.
Miller was a founding member of the Juneau Icefield Research Project in 1946. He became the long-time director of what was later called the Juneau Icefield Research Program that has included the longest, continuous study of glaciers anywhere in the world.
Former Juneau resident Lance Miller said his father was passionate about the importance of such basic research and he even considered one last visit to the Juneau Icefield and the surrounding glaciers.
“One of the things was always the mass balance, which is essentially how much snow falls and how much snow melts and ablates, and what that means for the life of a glacier, and how climate change has affected that. One of his initial goals and the long-term goal of the program was the health of glaciers, if you will. They’re really a litmus for what’s going on in the world for climate. So, still following up on those basic research questions.”
http://s3-us-west-2.amazonaws.com/ktoo/2014/01/28maynard1.mp3
Related story: “JIRP director developed early interest in Southeast Alaska glaciers,” which includes a KTOO feature on JIRP broadcast in 2000.
JIRP includes eleven permanent field stations and dozens of temporary camps spread out over 5,000 square miles.
In an interview with KTOO on the icefield in 1999, Dr. Miller explained that the program originally started with the Office of Naval Research to study possible military operations in the Arctic such as ballistic missile submarines running under the sea ice.
“It’s the only program of its kind in the world, and it’s one that evolved out of absolute necessity. In those early years, we had to live out of tents, particularly in the Navy contract years. It was very difficult to do a lot of that research because there was so much housekeeping and logistics necessary. We estimate now that 80% of our time up here on the Juneau Icefield in those early summers in the ’40s and ’50s was spent on survival, housekeeping, and logistics. Leaving 20% for research and other activities.”
Dr. Miller said the military essentially lost interest after the launch of Sputnik that heralded the dawn of the Space Age. JIRP then became an academic and research teaching program for training the next generation of new scientists.
“I would say in the fifty years of our program that, frankly, very likely 50% of the leaders in arctic and polar science in the United States have come from our program.”
The best and brightest college students are selected to help during an eight-week immersion program. It includes helping out professional scientists from around the world in arctic ecology, glaciology, environmental science, geology, and biology who converge on the Juneau Icefield every summer to conduct their research.
Miller said that it is the perfect place to watch the push and pull of the counter-rotating Aleutian low and the Arctic high pressure systems.
“Those two gigantic swirls or gyres in the atmosphere meet right along the coast of Alaska in the spring and the summer, right where these icefields are. So the interaction between them shifts back and forth, and back and forth — we call that the Arctic Front — across the icefield, revealing on this icefield the climatic history of our atmosphere and climate change on the whole planet throughout the year. So, it’s a remarkably sensitive field site for observing that kind of phenomenon.”
From the time he was about six years old until high school, Lance Miller said that he and his brother Ross often accompanied their parents during their summer stays on the icefield.
“It took a while to figure out that this is pretty unique. My brother and I have talked about that since, like ‘Wow, this was kind of different!’ You’re running around at these camps and there’s snowmachines, and crevasses to crawl into, and skiing in the summer. It was a great place to be let loose, so to speak, and hopefully contribute as well.”
Dr. Miller retired as the head of JIRP in 2009 and Dr. Jeffrey Kavanaugh of the University of Alberta was appointed as the new permanent director in 2011.
Lance Miller said his dad always had a positive outlook. His grandchildren even remarked during a recent holiday visit that they were impressed by his “unrelenting enthusiasm”.
Dr. Miller leaves behind his sons Lance and Ross and their families that include a total of four children. Miller’s wife Joan passed away about six years ago.
Arrangements for services are pending.
http://s3-us-west-2.amazonaws.com/ktoo/2014/01/MMILLER.mp4
This segment profiling the Juneau Icefield Research Program aired as part of KTOO-TV’s Rain Country program in 1988.
Link to website for Juneau Icefield Research Program | 科技 |
2016-40/4016/en_head.json.gz/5403 | Neanderthal genes helped modern humans evolve, studies suggest
Neanderthals and modern humans share a sliver of genetic code, studies show.
A reconstructed Neanderthal skeleton, right, and a modern human version of a skeleton on display at the Museum of Natural History in New York. (Frank Franklin / Associated Press)
Geoffrey Mohan
Mating between Neanderthals and the ancestors of Europeans and East Asians gave our forebears important evolutionary advantages but may have created a lot of sterile males, wiping out much of that primitive DNA, new genetic studies suggest. The comparison of Neanderthal and modern human genomes, published online Wednesday in the journals Nature and Science, identified specific sequences of altered DNA that both Neanderthals and several hundred modern Europeans and Asians had in common.
Those stretches of common heritage offer intriguing hints at what borrowed code helped modern humans adapt, and what was eliminated.
The strongest remnant of our Neanderthal heritage appears to be centered around as-yet unknown changes in skin and hair that likely proved advantageous, the two studies suggest.
“The group of genes that stand out are genes that code for things in the skin, particularly keratin, which is a structural component of skin, and another group of genes that are keratins in the hair also pops up,” said geneticist Svante Paabo of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, one of the authors of the research published online Wednesday in the journal Nature. “So it suggests that something came over from Neanderthals to present-day people that had to do with the skin and was advantageous and rose to high frequency.”
The two studies add detail to a growing consensus that modern human ancestors did more than bump elbows and eventually replace the Neanderthals that preceded them out of Africa. They mated with them around 50,000 years ago — a series of as many as 300 encounters that has left a 1% to 3% Neanderthal footprint on the genome of anatomically modern Europeans and Asians, the researchers said.
“Individually, we are a little bit Neanderthal,” said Joshua M. Akey, a population geneticist from the University of Washington, and lead author of the study published in Science. “Collectively, there is a substantial part of the Neanderthal genome that’s still floating around in the human population that’s just shattered into different pieces, and everyone has slightly different parts.”
The reports build on the publication in December of the full genome of Neanderthals that showed that they were genetically closer to modern Europeans and Asians than to modern Africans. The best explanation for that phenomenon was gene flow — a fancy term for interbreeding between the divergent species, which shared a common ancestor some 300,000 to 500,000 years ago.
At least 20% of the Neanderthal genome “introgressed” into the genome of our European and Asian ancestors, and East Asians retained slightly more of it, according to Akey’s analysis, based on genomes from 379 Europeans and 286 East Asians.
The slightly larger Neanderthal footprint among East Asians is not easily explained without a second “pulse” of gene transfer after they parted from Europeans, Akey’s study suggests. “It’s a two-night-stand theory now,” Akey said.
As few as 300 matings could have inserted the Neanderthal heritage in the genome of the common ancestors of Europeans and East Asians, Akey estimated, though he cautioned that there was a lot of uncertainty surrounding such an estimate. “Those 300 matings could happen in a single generation; they could be spread out over 10 generations. We can’t distinguish,” Akey said.
So where did all the Neanderthal genetic code go? Much of it went to the grave with sterile males, who carried it on their single X chromosome, said Paabo. “Males have only one X chromosome, so if they get a slightly bad version of this from Neanderthals, they have no other version of the X chromosome to compensate with,” he said.
Without strong selection pressures, the Neanderthal portion of the modern European and East Asian genome could have been twice as large as is evident now, said Paabo’s co-author, Sriram Sankararaman, a statistical geneticist at Harvard Medical School who analyzed more than 1,000 European and Asian genomes. “The 2% we see today is what’s remaining after there’s been some purging,” Sankararaman said. “We think that it was reduced by about a third.” That purge of Neanderthal DNA is “a huge amount in a relatively short period of time,” he said.
The Neanderthal genome project has made large advances in the past several years in the understanding of our closest human relatives, who vanished about 28,000 years ago. Short-bodied and brutishly strong, with oval skulls featuring pronounced brows and nasal cavities, the Neanderthals were well adapted for colder climates, using flaked tools, and hunting to supplement forage unavailable in colder months. For decades, paleontologists and geneticists found little evidence of interbreeding, but as methods and tools improved, evidence began to accrue in both the fossil record and genetic analyses.
The first draft of a full Neanderthal genome, published in 2010 by researchers at the Max Planck Institute, was followed last year by a far more detailed description of the genome using samples from multiple Neanderthal fossils. More recent work in December established that Neanderthals were more related to Europeans and East Asians than to modern Africans, suggesting gene flow between them. Analysis revealed that about 1% to 3% of the non-African genome could be attributed to Neanderthals, with East Asians having a slightly larger share than that of Europeans.
“It was established a couple years ago that there was a small but significant admixture with Neanderthals, but that doesn’t mean that the genes that were brought into modern humans had any function, that they were an improvement on what modern humans had,” said Montgomery Slatkin, a UC Berkeley biologist who has done similar research on Neanderthal genetics but was not involved in either study. “But now there is convincing evidence that indeed some of them at least were selected in humans.”
Genes linked to several modern diseases were among the Neanderthal legacy, including those correlated to Type 2 diabetes. But how much of a risk we inherited is debatable — the diabetes gene likely helped us survive food shortages, and may have proved detrimental as food became all too abundant in recent times.
Other studies have traced important immune system genes to Neanderthals and another extinct group, the Denisovans.
While both studies highlight the DNA that anatomically modern humans have in common with an extinct species, ultimately researchers are more interested in what makes us modern humans.
“That’s what really burningly interests me in the coming years,” Paabo said.
“Our genomes are kind of this new territory for excavating the remains of unknown hominins,” Akey said. “I’m pretty confident that we’ll be able to find new twigs on the family tree.”
In the meantime, we may want to give Neanderthals their due, Paabo suggested.
“They are not sort of fully extinct, if you will,” he said. “They live on in some of us today — a little bit.”
2016-40/4016/en_head.json.gz/5511 | SOURCE: Razer
Koenigsegg and Razer Push Performance Obsession With Design Partnership
Supercar Icon and Cult Gaming Brand Collaborate on Limited Edition Gaming Laptop
CARLSBAD, CA--(Marketwired - Feb 25, 2014) - Razer™, the global leader in gaming devices and software, announced today a partnership with Swedish supercar manufacturer Koenigsegg to produce a series of not-for-sale Koenigsegg limited edition Razer Blade laptops. The alliance is emblematic of the two brands' common preoccupation with engineering perfection.
"Every single detail of a Koenigsegg car is measured against our continuing goal to enhance vehicle performance," says Koenigsegg Automotive AB CEO Christian von Koenigsegg. "This is reflected in everything we do. Nothing is insignificant. We find our doppelganger in Razer -- a company inspired beyond conventional reason to design products for extreme functionality and fun."
The relationship is tethered between Koenigsegg's headquarters in Ängelholm, Sweden and Razer's San Francisco design offices, through which Razer's roster of in-house scientists and engineers are focused on developing cutting-edge technology. The partners plan to inaugurate the Razer-Koenigsegg relationship with a limited edition of custom not-for-sale Blade laptops, which a select few Koenigsegg owners and Razer fans will get to use. The gaming system will boast power that belies its ultra-thin chassis, which will be CNC machined and appointed with unmistakable Koenigsegg cachet.
The Koenigsegg Razer Blade will be unveiled at the Geneva International Motor Show in March 2014, where it will show off its performance, letting a lucky few drive virtual Agera Rs in the best possible setting. Both Razer and Koenigsegg see this collaboration as a first step in a long relationship from which more exciting products can be born and brought to market. "As far as I'm concerned, Koenigsegg is the archetype by which all design innovation may be measured universally -- a company hell bent on combining beauty, performance and sheer power," says Min-Liang Tan, Razer co-founder, CEO and creative director. "Our shared obsession with extreme performance makes working with Christian and his crew an exciting project and an honor."
To celebrate the inauguration of the Koenigsegg-Razer relationship, Razer will be giving away two Koenigsegg-branded Razer Blades. Details about the promotions will be available online at www.razerzone.com/koenigsegg and www.razerzone.com/koenigsegg-cn. Additionally, several custom wallpaper images are available to the public, free of charge, at www.razerzone.com/downloads.
http://www2.razerzone.com/resource/downloads/resc/Blade_Koenigsegg/RZR_Blade_Koenigsegg.zip
About Razer:
Razer™ is the world leader in devices and software platforms that enable, connect and entertain a worldwide community of electronic entertainment enthusiasts, 24/7. With a rich history in gaming, music and design, Razer's award-winning technology includes voice-over IP and other social applications and devices; programs for music production, performance and enjoyment; cloud-based solutions for customizing and enhancing computer systems and related product performance; and a wide array of award-winning laptops, tablets, audio products, hardware and accessories, and apparel.
Razer applies its design and engineering resources to develop products that support the on-the-go lifestyle inherent to its contemporary, global, technophile community, fulfilling their immediate wants and needs wherever they are. At Razer, everything it does resonates with the Company credo: For Gamers. By Gamers.™
For more information, please visit http://www.razerzone.com/.
About Koenigsegg Automotive AB:
Founded by Christian von Koenigsegg in 1994, Koenigsegg Automotive AB designs, engineers, and hand builds the most technologically advanced, hypercars in the world. Armed with the belief that perfection is a moving target, Christian von Koenigsegg's endless drive to create the perfect supercar has led to innovations such as the most power-dense engine in the world, the first green hypercar, and the Triplex Suspension.
Koenigsegg creates just a handful of its bespoke hypercars each year at its state of the art facility located in Ängelholm, Sweden, that was once home to the Swedish Air Force's JAS 39 Gripen fighter jets. Koenigsegg's latest offerings, the Agera and Agera R, follow Koenigsegg's legacy and set new hypercar benchmarks for performance, comfort, practicality, and sheer driving enjoyment. Koenigsegg - The Spirit of Performance.
Razer - For Gamers. By Gamers. | 科技 |
2016-40/4016/en_head.json.gz/5640 | May 30, 2003
David Morse/Ann Sullivan, NASA Ames Research Center, Moffett Field, Calif. Phone: 650/604-3039 or 650/604-9000
RELEASE: 03-42AR
ALLEN FLYNT NAMED NASA AMES DEPUTY CENTER DIRECTOR
G. Allen Flynt was today named deputy director at NASA's Ames Research Center, Moffett Field, Calif., effective Aug. 3. Dr. Steven Zornetzer, who has been acting deputy center director since November 2002, was named Ames' deputy director for research.
Flynt comes to Ames from the NASA Johnson Space Center (JSC) in Houston, where he served as manager of the Extra-Vehicular Activity (EVA) Project Office. As Manager of JSC's EVA Project Office, Flynt developed hardware, integration standards, capabilities, services, techniques, templates, and other information necessary to provide spacewalking services to the Space Shuttle and International Space Station programs. More recently, Flynt was among the senior NASA officials tasked to help direct debris recovery efforts for the Space Shuttle Columbia in Lufkin, Texas.
"Allen's leadership was vital in the unprecedented cooperation we witnessed between federal, state, and local organizations during the Columbia recovery effort," said NASA Administrator Sean O'Keefe. "His management and technology integration experience will be invaluable to our colleagues at Ames."
"Allen brings a wealth of experience from the human space flight community at NASA," added Dr. Jeremiah F. Creedon, Associate Administrator for the Office of Aerospace Technology at NASA Headquarters in Washington. "He's clearly demonstrated a capacity to conquer management challenges. Scott Hubbard knows he's fortunate to have a proven leader on his Ames management team."
Flynt began his NASA career in 1986 as an analyst in the National Space Transportation System program control office at JSC. He next worked as an analyst in the Orbiter Project Office before moving on to serve in the Space Shuttle program control arena. He managed the extra-vehicular activity mobility unit project from 1992 until 1995 and has held various leadership and management roles in the International Space Station Program and EVA Project offices.
"I am delighted to have Allen join us as part of the senior management team at NASA Ames," said Ames Center Director G. Scott Hubbard. "This appointment and Allen's expertise and perspective provide us with a tremendous opportunity to develop a closer working relationship between the human space flight and research and technology worlds."
"I am honored and excited by this opportunity to take on this important new leadership assignment at a premier NASA research facility such as Ames," said Flynt. "I am very optimistic that this will open up new avenues for collaboration, partnership and technology infusion into NASA missions."
Flynt holds a bachelor's degree in industrial engineering from Texas A&M University, College Station, and has been honored with numerous awards. Those awards include the NASA Exceptional Achievement medal, the Victor Prather award, outstanding and superior performance awards, and the "Silver Snoopy," which is presented by the astronaut corps for outstanding service.
To see images of G. Allen Flynt, please click here.
-end-
| 科技 |
2016-40/4016/en_head.json.gz/5705 | Otellini Retiring from Helm at Intel
Intel CEO Paul Otellini Retiring from Helm
Intel CEO Paul Otellini, a major fixture in the technology universe, has announced that he will retire as a company officer and director. His retirement, scheduled for the stockholders' meeting in May, follows a career of nearly 40 years with the industry's dominant chipmaker.
Andy Bryant, chairman of the board, said in a statement that Otellini "has been a very strong leader," and noted that he has only been the fifth CEO in the company's 45 years of existence. The retirement date provides six months for a transition to a new CEO.
'New Generation'
For his part, Otellini, 62, told news media that, after nearly four decades with Intel, including eight years as CEO, "it's time to move on and transfer Intel's helm to a new generation of leadership."
Otellini's successor will find a rapidly changing environment in the semiconductor industry. Intel's profit in the third quarter fell 14 percent due to higher expenses and a drop in personal computer sales, as the industry moves toward mobile and Intel tries to compete in that space. Intel remains king for desktop and laptop PCs, whose flagging demand may pick up in response to the recent release of Windows 8.
His successor will also have an exceptionally large pair of shoes to fill. During the time he's been CEO -- from the second quarter of 2005 through the present -- the company has generated $107 billion in cash from operations, distributed $23.5 billion in dividends, and increased the quarterly dividend 181 percent from $0.08 to $0.225. The company had record annual revenue and net income during this time, with revenue growing from $38.8 billion to $54 billion.
'Hand-Picked'
Otellini generally receives high marks for streamlining the company's operations and cost structure, originating and launching the Ultrabook form factor and spec, increasing partnerships, and launching the first smartphones and tablets with Intel processors. The company also achieved dramatic improvement in the energy efficiency of its processors, and such innovations as High-K/Metal gate and 3D Tri-gate transistors, during his tenure.
Laura DiDio, an analyst with Information Technology Intelligence Consulting, noted that Otellini had been "hand-picked" by the previous CEO, Craig Barrett, who himself had been hand-picked by the legendary trio that started the company -- Robert Noyce, Gordon Moore and Andy Grove.
She pointed out that, unlike some recent forced transitions at other companies, this one is "natural," as indicated by the fact that his departure is six months hence. DiDio expressed some surprise that the search will be both internal and external. She said that she thought "they'd already have someone in mind," although she did note that three new promotions could position their occupants for the top job at some point.
Along with the announcement of Otellini's retirement, Intel said it was promoting three executives to the position of executive vice president. These are Renee James, head of software, Brian Krzanich, COO and head of worldwide manufacturing, and Stacy Smith, CFO and director of corporate strategy.
Read more on: Intel, Paul Otellini, Andy Bryant, Processors | 科技 |
2016-40/4016/en_head.json.gz/5713 | Psychologist Links Andes Crash and Survival Story to Human Evolution in New Book Article ID: 607463
Released: 10-Sep-2013 12:00 PM EDT
Source Newsroom: Southeastern Louisiana University
Andes crash, survival, cannibalism, human evolution, rituals and human evolution, human survival
Newswise — HAMMOND – The story of the Uruguayan rugby team, whose airplane crashed in the Andes Mountains in 1972 and had to resort to cannibalism to survive until their rescue, has strong roots in the history of human evolution, according to a Southeastern Louisiana University psychology professor.
The 16 young men – who endured 72 days of bitter cold, a lack of food and other resources – saw their lives suddenly reduced to the basics of daily survival. The men, most of whom were members of the Old Christians Rugby Club from Montevideo, Uruguay, and alumni of Stella Maris College, were on their way to play a rugby match in Chile and were the only survivors of the crash of a plane that carried 45 people, including the crew and family members. The others either died during or right after the crash or in a major avalanche that occurred several days later.
“They survived by accessing the resources of their own human legacy, which was enhanced because they were already a team,” said Matt J. Rossano, author of the just published book “Mortal Rituals: What the Story of the Andes Survivors Tells Us about Human Evolution.”
Rugby was introduced at Stella Maris College by the Irish priests who taught there and favored the game over the Latin American-preferred game of soccer. For this team, Rossano said, rugby was similar to a more ancestral way of thinking and prepared them for the rigors of survival. Rugby, he said, requires a smothering of the ego and complete submission into a team effort.
“Our human ancestors, Homo erectus, were odd-ball primates whose fate depended on their smarts, tools and the ability to work together,” he explained. “Left on his own, Homo erectus didn’t have a chance. The group was life, while separation was a death sentence. The same applied to the Andes survivors.”
He said the group was saved through teamwork, faith and a well-organized social system that was reinforced by ritual. Rossano discusses how a hierarchy of leaders and workers was established among the group, which included two natural leaders; several lieutenants, mostly young boys who did the odd jobs assigned by the leaders; the medical crew who took care of the injured; a group he describes as “workers and parasites,” the complainers who drifted into complacency and a state of constant complaining; and the expeditionaries, who would be seeking a way through the mountains for rescue.
All of them together formed a hierarchical community with the primary focus of survival, a trait likely inherited from a common ancestor to early man and all the great apes, he explained.
Rossano – who writes frequently on religion, science, evolution and human behavior – relied heavily on first-person accounts of the survival story mainly taken from the books “Alive: The Story of the Andes Survivors” by Piers Paul Read and “Miracle in the Andes: 72 Days on the Mountain and My Long Trek Home” by Nando Parrado, one of the survivors.
As the meager supplies in the plane dwindled, the group was forced by the circumstances to decide to break a long-standing taboo, the consumption of deceased fellow passengers. “Ten days into their survival, they confronted what had previously been only private thoughts or cautious whispers,” he said. “That was eating the dead. It eventually was suggested by one of the medical students on the team.”
“Certainly this was taboo, but they had to push aside revulsions and their own deep conflicts,” Rossano added. “They saw it as the only way to survive.
For some of the more devoutly religious ones, it was seen as a moral duty to try to survive. That became the only relevant issue.”
As all of the team members were from a Catholic tradition, they employed ritual to keep their spirits up in the face of worsening conditions. Nightly discussions and debate followed by rosary recited in unison in the fuselage of the plane helped maintain a unity of purpose, explained Rossano, author of “Supernatural Selection: How Religion Evolved.”
“This ritual meant different things to different people,” he said. “For the devout, it was a heartfelt petition to their God for strength, mercy and even a miracle. For the skeptical, it was a means of mental relaxation, something that helped preserve their sanity and helped them sleep. No one put himself above the ritual. The solidarity of the group was more important than any single person’s doubts or misgivings about the supernatural. They used these rituals and routines to recognize their fates were interlocked. They were family.”
Ritual was employed by the expeditionaries, whose job it was to trek from the crash site, over the mountains, in order to find help. Eight times they left the wreckage in an attempt to climb the mountain peak and find the valleys of Chile. As each attempt failed, they knew they had to continue.
“Ritual can harness the mind’s power to endure,” said Rossano. “Their ally was their minds.”
Using the ritual of focusing on one step at a time, frequently accompanied by a prayer used as a mantra, the expeditionaries pushed on. Rossano said they learned what Tibetan monks had known for centuries – that ritual can be used as a strategy for overcoming pain and as a way of increasing endurance. “It’s using the mind as a way of coping with suffering,” he added. “Studies have shown that meditation can have strong positive health benefits, including lower blood pressure, reduced heart rate, and overall mental health.”
Rossano said the community of survivors demonstrated a sense of self that was also known to man’s ancient ancestors. “It was a sense of self, not as a separate individual agent, but as someone embedded within a tight-knit community. It was a sense of self cultivated in the game of rugby and essential to the ultimate survival,” he said.
“Mortal Rituals” is heavily footnoted, referencing numerous academic sources Rossano uses in his presentation. The book was published by Columbia University Press.
###
Available online at www.southeastern.edu/news_media/news_releases | 科技 |
2016-40/4016/en_head.json.gz/5750 | Stories By Richard Harris
Neutrinos May Not Travel Faster Than Light After All
February 23, 2012 Researchers in Italy say a bad connection between devices could explain a startling result they had last year, when they thought they'd witnessed particles traveling faster than the speed of light. Further tests await, but it appears there was a subtle problem with the equipment at the lab, as many physicists had expected. The laws of physics may not need to be rewritten after all.
From left, engineers Eric Nicosia, Amin Ahmadi and Gavin Boogs work to solve an issue with part of a wind turbine at the Gamesa Corp. factory in Langhorne, Pa., on Feb. 10.
Maggie Starbard/NPR
Many Jobs May Be Gone With The Wind Energy Credit
February 16, 2012 The wind power industry in this country has grown fast in recent years, but that could come to a screeching halt if Congress doesn't renew a tax credit that wind farms get for the power they produce. Tens of thousands of jobs now depend on the tax credit, as more wind turbine manufacturers have taken root in the U.S.
Everything looked fine on my CT scan, but I didn't sound even close to right.
Courtesy of Richard Harris/NPR
How My Voice Went Silent
February 10, 2012 After coming down with a mysterious headache and a blazing sore throat, NPR science correspondent Richard Harris lost his voice. And it didn't come back. Doctors eventually pinpointed the cause: a paralyzed vocal cord.
Drilling Team Finally Hits Antarctica's Liquid Lake
February 9, 2012 After years of trying, Russian scientists say they have drilled into an Antarctic lake that is buried beneath more than two miles of ice. They are looking for signs of life that haven't been exposed to sky in 20 million years.
This map shows what the Earth's landmass looked like in the Precambrian Era, about 738 million years ago.
Chris Scotese/University of Texas at Arlington
'Amasia': The Next Supercontinent?
February 8, 2012 More than 100 million years from now, the Americas and Asia might fuse together, squishing the Arctic Ocean shut in the process. That's according to a new model that predicts where the next supercontinent may form. But don't worry: Humans will likely be long gone by then.
Natural gas is much cleaner than coal. But some energy analysts say an overabundance of the fuel could depress development in even cleaner energy sources like wind and solar power. Above, a rig in Washington, Pa., drills into shale rock to extract natural gas.
Keith Srakocic/AP
Could Cheap Natural Gas Slow Growth Of Renewable Energy?
February 2, 2012 The relatively clean gas is replacing dirty coal-fired power plants. That's good news for the environment. But in the long run, cheap natural gas might delay the transition to even cleaner sources of energy, such as wind and solar power.
EPA Creates Website To ID Biggest Emitters Of Greenhouse Gases
January 11, 2012 Ever wondered who the big greenhouse-gas emitters are in your neck of the woods? The answer is now just a click away.
Delegates To Durban Agree To Climate Treaty
December 12, 2011 United Nations climate talks in Durban, South Africa, weren't expected to produce much. But negotiators did make a deal — one that could lead to a major new climate treaty at the end of the decade.
Delegates worked into the early hours of Sunday morning on the final day of the climate talks in Durban, South Africa.
Rajesh Jantilal/AFP/Getty Images
At Last, Nations Agree To Landmark Climate Deal
December 11, 2011 After a third sleepless night, climate negotiators in Durban, South Africa, finally found a way to reach a compromise early Sunday. The agreement charts a course for a legally binding climate pact that would include all the major emitters, including China, the United States and India.
U.S. envoy Todd Stern delivers a speech on Thursday in Durban, South Africa, during the U.N. Climate Change Conference. Stephane De Sakutin/AFP/Getty Images
At Climate Talks, Frustration And Interruptions
December 8, 2011 Frustrated by what some see as U.S. foot-dragging on climate policy, an American college student interrupted U.S. envoy Todd Stern Thursday during his remarks at the climate conference in South Africa. Later, Stern emphasized that the U.S. has been working hard to advance global climate policy into the 21st century.
At Climate Talks, Resistance From India, China, U.S.
December 7, 2011 Fundamental disagreements among the nations attending the U.N. climate conference in Durban, South Africa, may stall a possible deal. There's still no consensus about the best way to move forward with an international agreement to reduce greenhouse gas emissions.
Climate-Treaty Talks Target U.S., China Emissions
December 6, 2011 Top delegates at the U.N. climate conference in Durban, South Africa, decide this week whether the Kyoto Protocol lives or dies. Tuesday morning, U.S. delegates have a one-on-one with China. The U.S. says it's open to discussing a future treaty but won't talk about anything legally binding until it knows what exactly would be in that agreement. China says it's open for talks but is vague beyond that.
Key provisions of the Kyoto Protocol expire in December of 2012, and experts say there's no real global framework in place to replace the treaty that was supposed to be the first step toward ambitious actions on climate change. Above, a coal-fired power plant in eastern China. China is now the leading carbon dioxide emitter in the world.
What Will Become Of The Kyoto Climate Treaty?
November 29, 2011 The 1997 treaty was supposed to be a first step toward more ambitious actions on climate change. But it's now on the brink of fading into irrelevance as unified, global actions on climate policy have been almost nonexistent.
The U.S. is second only to China in emitting gases that cause global warming. Above, the smoke stacks at American Electric Power's Mountaineer power plant in West Virginia.
Saul Loeb/AFP/Getty Images
Ahead Of Climate Talks, U.S. Leadership In Question
November 28, 2011 A presidential pledge to reduce emissions two years ago went nowhere in Congress. Today, the U.S. is spewing more carbon dioxide than ever into the atmosphere. Without meaningful U.S. action on emissions, a global pact seems unlikely to emerge from U.N. climate talks under way in Durban, South Africa.
A U.N. climate panel says that we can expect more extreme weather conditions as a result of climate change. Above, people run from a high wave on Nov. 8 in Nice, France, where heavy rain and flooding forced hundreds to evacuate. Vallery Hache/AFP/Getty Images
Climate Panel: More Extreme Weather On Way
November 18, 2011 Climate change will bring more heat waves, more intense rainfall and more expensive natural disasters, says a group of more than 200 scientists convened by the United Nations.
Towards Filming Chemical Reactions
High-speed cameras can film a bullet passing through an apple. Chemical reactions, however, are far too fast even for the best cameras. Attosecond physics promises new approaches to study and manipulate them.
A Seriously Defocused Spider
A jumping spider about to pounce on its prey needs to be able to accurately measure its distance from its target in order to be able to hit it with precision. This is possible thanks to a visual approach previously unknown to exist in nature.
Flat Light from a Flat Diamond
The possibility of polarizing light in optical fibers establishes graphene as a likely key player in the future of optical technologies; a new application emerges for this material, which rocked the scientific world with its fascinating properties.
Volume 3 Story 3 - 27/10/2008
Make Two, Keep One
To generate single photons remains a challenge, to have them in a pure, well-defined quantum state even more so. Now this is possible, thus paving the way for future quantum technologies and applications.
Generation of a pure single photon. The laser pulse (blue) enters a non-linear crystal that generates two photons (red, middle). A polarizing beam splitter is then used to separate the two photons, of which one is used as a trigger (left) to indicate that there is another photon (right) ready to be used.
Light is all around us in all possible colors, polarizations and intensities. But, despite the omnipresence of light in nature, it is difficult to generate a single photon, and it is even more difficult to get it in a pure quantum state. In the Clarendon Laboratory at the University of Oxford (UK), Peter J. Mosley and his collaborators have accomplished this. They have produced pure single photons using a nonlinear optical process. Since it is extremely difficult to obtain a single photon, the authors have opted for generating two but only holding on to one of them: make two, keep one.
Since the concept of photon was proposed over 100 years ago, it has come to revolutionize science and technology. A single photon is the tiniest bit of light. "A single photon," Mosley explains, "is a discrete packet of energy that exists as a single excitation of the electromagnetic field that makes up light." In most everyday situations, light is made up of such a vast amount of photons that a single one cannot easily be isolated. One could be led to believe that single photons could be isolated by sufficiently attenuating a light source. This approach works to a certain degree. What is obtained, however, is a statistical mixture of pulses of zero, one, two, or even more photons. Be that as it may, it would be possible to measure the number of photons present in a certain pulse; such measurement, however, would destroy the photons and make them unavailable for further use. Therefore, more sophisticated techniques are needed to build a so-called photon-gun, which generates single photons on demand.
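A brief aside to make the statistics concrete: the article does not give the math, but attenuated laser light is conventionally described by Poisson photon-number statistics (a textbook result, not something specific to this experiment):

$$P(n) = \frac{e^{-\bar{n}}\,\bar{n}^{\,n}}{n!}, \qquad \bar{n}=0.1:\;\; P(0)\approx 0.905,\quad P(1)\approx 0.090,\quad P(n\ge 2)\approx 0.005 .$$

So even at strong attenuation most pulses are empty, and an irreducible fraction of the non-empty pulses still carries two or more photons, which is why attenuation alone can never yield true single photons on demand.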
But why would one be interested in single photons? And why in a pure quantum state? Single photons in a pure quantum state may prove to be the building blocks for future quantum technologies. These technologies promise enormous improvements over their classical counterparts in terms of speed or security, for example. In quantum information processing, single photons may be used in quantum logic gates, playing the role of electrons in current electric circuits. "In a classical computer", Mosley points out, "information is stored as a series of bits — groups of electrons that represent either the value of 1 or 0. In optical quantum information processing these bits are replaced with quantum bits, called qubits, that can exist in what is known as a superposition state — a qubit can exist simultaneously in both the 1 state and the 0 state." And this possibility of superposition is at the very heart of computational speed-up: a quantum computer is a highly parallel machine, intrinsically so, which can sustain a large amount of computations taking place, on the same physical support, at the same time.
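As a minimal notational sketch of the superposition Mosley describes (standard quantum-information notation, not drawn from the paper itself), a single qubit is a state

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 ,$$

and a register of N such qubits lives in a 2^N-dimensional state space, which is the precise sense in which a quantum computer is "intrinsically parallel."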
It is difficult to make single photons on demand; on the other hand, it is relatively easy to produce pairs of photons. The process of Spontaneous Parametric Down Conversion (SPDC) is one of the most convenient ways to produce pairs of photons. In SPDC a strong pump laser beam is shone onto a special kind of crystal. Occasionally a pump photon disappears and two new photons are created simultaneously. The production of pairs in the nonlinear crystal is probabilistic as in the case of an attenuated laser. Be that as it may, the fact that precisely two photons are produced simultaneously is an advantage. Therefore, the two photons can be taken apart and one of them can be used as a trigger (herald) to indicate the presence of the other, thereby telling us when a photon is ready to be used. These pairs of photons have special types of correlation in their color (frequency), in their spatial shape, and/or in their polarization. For many applications, such as in quantum cryptography, these types of correlation may be useful, but at the moment of generating pure single photons they are detrimental since they diminish the purity of the single photon. As long as the two photons are correlated, they are not independent of each other and, as a consequence, not in a pure, well-defined quantum state.
Mosley and his collaborators have managed to destroy the color correlation between the two new photons. They have produced heralded pure single photons. "This lack of correlation means that knowledge of the frequency of one photon gives no additional information as to the frequency of the other," Mosley explains. The spatial correlation was destroyed by generating photons with orthogonal polarization, while the color correlation was destroyed by carefully selecting the nonlinear crystal material and the color of the pump laser beam.
The purity of the heralded single photons is reflected in the lack of frequency correlation between pairs of photons. To demonstrate the purity of the single photons generated in Mosley's experiment, two identical single photon sources were used and the single photons were sent through a beamsplitter to observe the interference pattern. In technical terms, 4-photon coincidences were measured when the path of the interferometer was changed. Two of the four photons counted were used as triggers to signal when the heralded single photons were headed towards the interferometer. Mosley explains that "the visibility of the interference critically depends on the spectral correlation that exists in the photon pairs. What is more, the visibility of the interference pattern reveals the purity of the heralded single photons. In order to ensure that interference always occurs at the beamsplitter, the two heralded photons must be in pure states."
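The quantitative link the authors rely on here is a standard one in quantum optics, though the article does not spell it out. In the idealized case of a 50:50 beamsplitter and exactly one photon in each input, two independently heralded photons described by density matrices ρ₁ and ρ₂ interfere with visibility

$$V = \mathrm{Tr}(\rho_1\rho_2), \qquad \text{and for identical sources } (\rho_1=\rho_2=\rho):\; V = \mathrm{Tr}(\rho^2),$$

which is exactly the purity of the heralded state. Perfect visibility therefore requires the two photons to be in the same pure state, which is why the interference measurement doubles as a purity test.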
The demonstration of the generation of pure heralded single photons represents a step towards the implementation of quantum technologies and a way towards a deeper understanding of the foundations of physics. Warren P. Grice at the Center for Quantum Information Science of Oak Ridge National Laboratory (Tennessee) remarks that "the techniques laid out by Mosley and his coauthors' work overcome a number of fundamental stumbling blocks on the path toward the deployment of real quantum optics applications requiring multiple single photons. For the first time, they have demonstrated a heralded photon source in which the heralded photon is in a pure state." Grice is convinced that this is an important step towards the applicability of quantum states in technology and he has no doubt that soon "more hands on the table will bring more ideas and more applications." Xiaojuan Shi
2008 © Optics & Photonics Focus
XS is currently working on his doctoral thesis at ICFO - The Institute of Photonic Sciences, Barcelona (Spain).
Peter J. Mosley, Jeff S. Lundeen, Brian J. Smith, Piotr Wasylczyk, Alfred B. U'Ren, Christine Silberhorn, and Ian A. Walmsley, Heralded Generation of Ultrafast Single Photons in Pure Quantum States, Physical Review Letters (2008) 100, 133601.
What is the Parapsychological Association?
Last Updated: Saturday, September 25, 2010
The Parapsychological Association, Inc. (PA) is the international professional organization of scientists and scholars engaged in the study of ‘psi’ (or ‘psychic’) experiences, such as telepathy, clairvoyance, remote viewing, psychokinesis, psychic healing, and precognition. Such experiences seem to challenge contemporary conceptions of human nature and of the physical world. They appear to involve the transfer of information and the influence of physical systems independently of time and space, via mechanisms we cannot currently explain.

The primary objective of the Parapsychological Association is to achieve a scientific understanding of these experiences. In view of this, PA members develop and refine methodologies for studying psi and its physical, biological or psychological underpinnings. They assess hypotheses and theories through experiments, conceptual models and field investigations, and seek to integrate their findings with other scientific domains. PA members also explore the meaning and impact of psychic experiences in human society, and assess the possibility of practical applications and technologies. While covering a wide range of perspectives, the PA, as a whole, is committed to:

- Promoting scholarship and scientific inquiry into currently unexplained aspects of human experience
- Disseminating responsible information to the wider public and to the scientific community
- Integrating this information with knowledge from other disciplines

Psi research: balancing openness and rigor

Psi experiences have been reported throughout history, in all cultures. Even today, as multiple surveys show, a wide segment of the world’s population reports having had at least one experience that they believe to have been psychic. These experiences, and the phenomena associated with them, are the subject matter of parapsychology. PA members use well-developed scientific methods to determine to what extent psi phenomena can be explained through presently understood processes -- whether physical or psychological -- and to what extent they may point to unknown forces and laws, or necessitate a revised model of consciousness and its relationship to the world.

Historically, science has made major advances in its understanding of the world through observation of ‘anomalies’ – phenomena or data that did not fit into the concepts of the time. On the other hand, scientific and academic institutions are justifiably cautious about adopting radically new principles, and they tend to be quite conservative in accepting the reality of anomalous phenomena. The PA is dedicated to ensuring that legitimate caution does not equate to dismissal or active avoidance, thus merely propagating our ignorance. To preclude science from stagnating into dogma, it is vital that we improve our understanding of our world, of ourselves and our experience. If new principles of physics, biology or psychology do underlie psi experiences, then our current knowledge of human nature and the world around us is incomplete -- and it will remain so, until the scientific community makes a sustained effort to understand these experiences.

An interdisciplinary matrix

Most likely, psi phenomena involve complex interactions between various subjective, interpersonal and environmental factors. Accordingly, parapsychology is an interdisciplinary field, with specialists from the biological, physical, behavioral and social sciences.
Approaches for investigating psi vary widely, including laboratory experimentation, field work, analytical studies, phenomenological approaches, case studies, surveys and historical research. PA members also engage in the construction of theoretical models and the development of new methodological and statistical tools. The diversity found within PA membership also leads to many different ‘schools of thought’ regarding the phenomena studied -- ranging from those who suspect that psi will eventually turn out to be an artifact of no major significance, to those who believe it will be accounted for through new developments in physics or biology, to those who argue that psi phenomena suggest a basis for spiritual beliefs.

PA Conventions and Publications

The PA provides an international forum for scholarly exchange through annual conferences, generally held in North America or in Europe, and through publication of the proceedings from these conferences. The PA has also sponsored special sessions for interdisciplinary scientific audiences, such as the American Association for the Advancement of Science. Electronic subscriptions to the PA's affiliated peer-reviewed publication, the Journal of Parapsychology, are provided to all members. The PA also publishes Mindfield: The Bulletin of the Parapsychological Association for its members.

About the organization

The PA was first established in 1957, and has been an affiliated organization of the American Association for the Advancement of Science (AAAS) since 1969. As of the year 2002, there are approximately 300 PA members from all over the world. The PA is a private, non-profit, 501(c)(3) tax-exempt organization, governed by Bylaws and with nine elected Directors. The PA is a non-adjudicating organization and endorses no ideologies or beliefs other than the value of rigorous scientific and scholarly inquiry.
IBM Breakthrough Provides a Closer Look at Atoms
IBM researchers have come up with a technique to view, record and study the behavior of atoms in real time, which could have a long-term impact on the way nanoscale chips and devices are built.

Recording atomic behavior in real time was not previously possible, and the breakthrough could help scientists get a better understanding of smaller structures and processing activity at an atomic scale, said Andreas Heinrich, a scientist and researcher at IBM Research. For example, the breakthrough could help scientists understand how long atoms can hold information -- or bits -- which could pave the way to build smaller devices. Processing at the atomic level happens in a matter of nanoseconds, and by understanding the atomic behavior over a time period, scientists could more effectively build nanoscale structures or devices for applications like storage and solar energy, Heinrich said. For solar energy, the breakthrough will help scientists view in real time the energy conversion of photons to electrons. Scientists will also be able to understand the electronic and magnetic activity of atoms, which could help them pursue smaller storage devices and structures with nanoscale components, Heinrich said.

"If you can't see things happen, then you have to infer this from unscientific measurements," Heinrich said. Microscopic objects could be measured in the past, but the new information helps scientists understand how objects dynamically change over relatively short time periods, said Michael Crommie, a professor of physics at the University of California, Berkeley, in a YouTube video describing the technology. "It gives us new handles for getting materials to behave the way we want whether that involves absorbing light or separating charge," Crommie said.

At the heart of the new technique are improvements in the scanning tunneling microscope (STM), which is like a high-speed camera that can record the behavior of atoms on a nanosecond scale. Magnetic atoms are hit with voltage pulses at specific intervals, and the microscope is able to record the events frame by frame. Some components were replaced in the STM to make the frame-by-frame recording possible. IBM has had the STM for 20 years, and the earlier components were not capable of recording events in real time.

"Since all modifications are external to the actual microscope, we believe that this technique will be widely employed by our research colleagues around the globe," Heinrich said.

The opportunity could change quantum computing, which consists of systems capable of performing massively parallel computation, Heinrich said. At the heart of a quantum computer are quantum bits (qubits), which interact with each other following the laws of quantum mechanics. Those laws apply to the interaction and behavior of matter on atomic and subatomic -- proton, neutron and electron -- levels. "IBM envisions using individual magnetic atoms on surfaces for this task -- using the electron spin of atoms as qubits," Heinrich said. Atoms are key players in quantum mechanics and in this approach, the STM will be used to position the atoms precisely on a custom-tailored surface. External magnetic fields will be employed to perform the necessary single qubit and multi-qubit operations. The STM will then be used to read out the state of the qubits at the end of the computation.
It will be possible to read the state of such an atom on the surface fast enough, Heinrich said.

Quantum computing has been researched for decades, but many problems have popped up around keeping data in a coherent format, making it difficult to run programs or computing tasks. Heinrich said that many key elements still need to be developed to make the technique applicable to quantum computing.

"The inherent advantage of this particular implementation of a quantum computer is the realization that if we can build and control one qubit, the step to controlling many qubits is rather small -- in stark contrast to most other schemes for quantum computation," Heinrich said.

IBM wants to push the limits of engineering, and the breakthrough is fundamental in understanding atoms, IBM's Heinrich said. The company wants to see what happens when a few atoms are put together in small structures.

"The IT industry has been shrinking down components ... but our worry is to jump to the scale of what will ultimately happen," Heinrich said, referring to IBM's desire to lead in the race to nanoscale computing.
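The article describes the measurement only qualitatively: magnetic atoms are hit with voltage pulses at specific intervals and recorded frame by frame. That is the signature of a pump-probe scheme, in which an excitation pulse is followed by a delayed read-out pulse and the delay is swept to trace how the atom's spin relaxes. The sketch below is a toy illustration of that idea only; the 100-nanosecond relaxation time, the noise level and the function names are assumptions, not IBM's numbers.

```python
import numpy as np

# Toy pump-probe sketch: a pump pulse excites an atom's spin and a probe pulse,
# fired after a chosen delay, measures how much of the excitation survives.
# Sweeping the delay builds up the relaxation curve "frame by frame".
T1 = 100e-9                              # assumed spin relaxation time (100 ns)
delays = np.linspace(0, 500e-9, 51)      # pump-probe delays from 0 to 500 ns
rng = np.random.default_rng(0)

def probe_signal(delay, shots=1000, noise=0.02):
    """Average probe read-out over many pump-probe cycles at one delay."""
    ideal = np.exp(-delay / T1)          # fraction of the excitation left
    return ideal + rng.normal(0.0, noise / np.sqrt(shots))

trace = np.array([probe_signal(d) for d in delays])

# Fitting log(signal) against delay recovers T1, i.e. how long the atom "holds a bit".
slope, _ = np.polyfit(delays, np.log(np.clip(trace, 1e-6, None)), 1)
print(f"estimated relaxation time: {-1.0 / slope * 1e9:.0f} ns")
```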
Practical IT insight from Tony Bradley
GoDaddy teams with Microsoft to offer Office 365 to small businesses
Tony Bradley
@techspective
Microsoft and GoDaddy have partnered to offer Office 365 as the exclusive business-class email and productivity tools service for GoDaddy’s small business customers. GoDaddy is offering customers three different tiers of Office 365 service: Email Essentials, Business Essentials, and Productivity Plus. The Email Essentials plan is only $4 per month, and is tailored to very small business customers. It lets the business set up an email using the company’s own domain; comes with 5GB of email storage, plus 2GB of SkyDrive Pro cloud storage; and lets customers sync email, calendar, and contacts across devices and platforms. The Business Essentials plan is closer to the standard Office 365 offerings. For $9 per month, it includes 50GB of email storage, and 25GB of SkyDrive Pro cloud storage. It also adds online HD video conferencing, file sharing and collaboration, and online access to the core Office Web Apps.
For $12.50 per month, Productivity Plus customers get to install the full Microsoft Office desktop suite on up to 5 PCs or Macs, and gain access to use Office Mobile Apps for Windows Phone, iPhone, and Android.
All of the tiers of service offered by GoDaddy also include the main benefits of Office 365 for small business customers. First, the infrastructure and applications are maintained by Microsoft, so it’s like getting a whole IT department thrown in for free. Second, the tools included in the Office 365 service will always be updated, and customers will have access to whatever is the most current version of the software available.
There are nearly 30 million small businesses in the United States, and an estimated 92 percent of them have fewer than four employees. Businesses that small face a challenge because they still need business-class tools and services, but lack the resources to implement and manage those tools on their own. A Boston Consulting Group study commissioned by Microsoft found that if more small and medium businesses had access to cutting edge IT tools, it could boost revenues by a combined $770 billion and create an estimated 6 million more jobs. If your small business has a domain bought from or hosted by GoDaddy, you should take a look at the new Office 365 offerings and see if they make sense for you. Related:
Tony Bradley is principal analyst with the Bradley Strategy Group, providing analysis and insight on tech trends. He is a prolific writer on a range of technology topics, has authored a number of books, and is a frequent speaker at industry events.
Microsoft Office 365 Beta Opens for Business: What's Inside
Email "GoDaddy teams with Microsoft to..." | 科技 |
2016-40/4016/en_head.json.gz/5881 | | Plant Engineering
Wireless alliance for 60 GHz technology
More than 15 technology companies announced the Wireless Gigabit (WiGig) Alliance, an organization formed to establish a unified specification for 60 Gigahertz (GHz) wireless technologies. The widespread availability and use of digital multimedia content has created an ever-increasing need for faster wireless connectivity that current wireless standards cannot support.
By Control Engineering Staff
This has fueled demand for a single technology that can support instantaneous file transfers, wireless display and docking, and streaming high definition media on a variety of devices. To meet this demand, the WiGig Alliance is developing a 60 GHz wireless technology that provides the optimal way to connect electronics, handheld devices and personal computers.
The WiGig specification will allow devices to communicate without wires at gigabit speeds within a typical room; the group’s vision is to create a global ecosystem of interoperable products based on this specification, which will unify the next generation of devices at speeds more than 10 times faster than today’s wireless LANs.
Among the companies on the board of directors are: Atheros Communications, Inc., Broadcom Corporation, Dell Inc., Intel Corporation, LG Electronics Inc., Marvell International Ltd., MediaTek Inc., Microsoft Corp., NEC Corporation, Nokia Corporation, Panasonic Corporation, Samsung Electronics Co., and Wilocity.
“We’re now at the point where the last barrier to wireless being able to do everything that wire can has fallen,” said Craig Mathias, a principal with the wireless and mobile advisory firm, Farpoint Group.
“In both the residence and the enterprise, more capacity and throughput are always desirable. WiGig Alliance is going to deliver technology that will have an enormous impact on connectivity and mobility, information technology, consumer electronics, and many other applications,” says Mathias.
www.WirelessGigabitAlliance.org/specifications
www.controleng.com
Augmented reality: a long way off?
Dan Sung | 3 March 2011 | Phones
Professor Steve Feiner teaches a class on AR each spring at Columbia University, New York. Every year, at the beginning of the course - a course which has the words "augmented reality" in the title - he asks his students how many of them have actually heard of the phrase before. In 2008, there were maybe two or three hands in the air out of a total of 15. Last spring, the number went up to almost half - proof, Pocket-lint suggests, that AR's moment has finally arrived. Not so, says the augmented reality guru down the line from his desk in New York City.
“People are getting excited about handheld AR but we were excited years ago,” says the man who practically invented the discipline along with the likes of Blair MacIntyre, his colleague on the KARMA project in 1991 - one of the first solid examples of augmented reality in action.
“If you asked people on the street who own a smartphone about Layar and Wikitude, they’d look at you like you were crazy,” he giggles down the phone as we half wonder whether that’s exactly what Feiner’s experiment of the morning will be. He softens.
“Well, it’s not the watershed moment for AR right now but it could be a watershed moment. Have you ever heard of VisiCalc?”
VisiCalc, as it turns out, was the original “killer app”. It was a spreadsheet application for the computers, the one which made people finally realise that these machines were worth owning.
“VisiCalc came out in the 70s,” he continues, “but it still wasn’t until well into the 90s when everyone had a computer in their own home.”
It’s not that Feiner has no faith in augmented reality and its development but, like many of the top experts in the field that we’ve spoken to in AR Week, he does harbour a certain level of distaste for the wave it around and look at the dots version of apps currently available.
“The model of smartphone AR is not good enough at the moment,” he explains. “You have to switch it on, go to an app, wait for the GPS to kick in, hold it up at arms length, look stupid, get tired and, when it finally kicks in, the camera’s field of view is not right anyway. Those adverts you see for Wiktude and Layar with what’s on the screen perfectly lining up like a frame around the real world just don’t tell the story.”
What he’s referring to is that the camera of a smartphone is just that. It’s a camera, for taking pictures, not for providing a 50mm type, 1:1 scale relation with the world. If they were built that way, it’d be too hard to fit the things in that you needed. One of the side effects of this in the use of AR on the hoof, is that it brings a certain degree of distortion to your surroundings when viewed through your phone, something which adds yet another problem to the large pile of issues around getting the real and the virtual to line up on demand and in real time. Just in case we weren’t depressed enough, Feiner chucks on a few more.
“The GPS system isn’t good enough either. You can get better equipment with backpacks that surveyors use. It brings the accuracy down to a few cm rather 5m or so and that software is freely available. It’s just that it costs in excess of $10,000.”
Not something we'll be finding in the next Samsung Bada then.
“We could also use better gyros, compasses and accelerometers too but the likes of Qualcomm and Nokia have labs working on and targeting AR all over the world and they’ve realised this. So, what they’re actually doing is making their chip technology more tailored for these features to better support what’s already there.”
Thankfully, Feiner’s outlook is much cheerier than his reservations might suggest. Within the iron clad practicality of this scientific researcher beats the heart of a man with a dream.
“I honesty believe that at some point in the future we’re going to have AR eyewear that’s sufficiently light weight, comfortable, visually appealing, high quality enough and at the right price that people will want to wear while walking around. It has to be socially acceptable and desirable.”
While it may sound unlikely to those with 20-20 vision or people who’ve chosen to wear contact lenses instead of spectacle frames to correct their myopia or hyperopia, as Feiner points out, it has become the norm to wear little bits of plastic in our ears - for some they’re even a badge of status. However, he does appreciate that there’s a difference.
“We need to be very careful with eyewear, though. It’s a much higher bar to pass when we’re dealing with someone's face. It’s the first place people look. We need to get around the whole Borg 'we will assimilate you’ look. We need to have good industrial design to make the glasses appealing and comfortable enough, and still be able to see into people’s eyes, but industrial designers are very good and I’m confident that they’ll find a way of doing it.”
As Feiner points out, there are a certain number of prerequisites for such AR glasses. They need to wirelessly connect to the user’s mobile or at least have tiny CPUs and GPUs of their own and not “instant on” but “always on”, but with an easy way of turning them off when you want a break.
"The interesting problem is the software."
As the professor points out, it’s all very well having the kit and the infrastructure in place but making it usable, understandable, helpful and non-intrusive is a whole other task. Aside the ongoing work with the US Marines for the ARMAR (Augmented Reality for Maintenance and Repair) project, software design and development is the primary focus of the work of Feiner and his students at Columbia when it comes to AR.
“It has to be a low level interface,” he stresses. “We don’t want people to get run over while totally immersed in the sky or the trees or something else.”
The key is that while we’re normally undertaking a number of jobs at any one time, there’s always a prime task to which we’re attending and the research is about how to narrow what’s out there down to just the information that we require for what’s in hand. One technique is simply to filter out everything else that’s going on but another is to keep most of the input there, but only highlighting that which is germane to the moment.
Rather like how some calendars work, the UI can focus on the primary task and make the information displayed for that purpose much more obvious than anything else. The secondary, tertiary and next level data can still be there but slightly smaller or not highlighted in the same way, and the AR below that can either still be present or not at all.
“It’s what we’re referring to as a fish-eye view and proximity is one way to do it. The idea is that things physically closer to the user are the ones that are more important so the information for these objects are the ones the system would prioritise.
“We can even work with haptic, visual and sonic cues as well to bring more of the user’s senses into play, but what we really need for it all to work is a system that shows us the world by sensing our responses to it, as well as an element of us telling it what we want it to do. That’s the heart of what it needs before we can have good, usable AR.”
If Feiner and the department at Columbia didn't already have their work cut out for them, there are also plenty of issues if you want to start rendering the augmentations on these glasses in 3D, as will doubtless become important when augmentations have to be objects rather than just text. So given that it's going to be tricky enough even to get good AR spectacles going, how about the nirvana of the augmented reality contact lenses?
“Contact lenses? Yep, it will happen. It’s very delicate and there’s lots of problems including radiation to your eyeball and how people are going to be able to see a sharp image so close to their retina rather than a wash of colour. It’s hard but it will be doable.
“But my question to you is this - why have it washing around on the surface of your eye when you can have it implanted inside your head? Sure there are social and ethical issues but these things will change with each generation as it becomes more acceptable.
“And then if you can have it implanted in your head as an adult, then why not have it done at birth? And if that can happen, then why not into our genes? And at that point, we would have changed the human species altogether but beyond that, I’ve no idea where we’ll go.”
For more information on what Qualcomm is doing with Augmented Reality please click: http://www.qualcomm.co.uk/products/augmented-reality
And for more on AR Week head over to our AR Week homepage on Pocket-lint
Obama to Unveil Initiative to Map the Human Brain
President Obama on Tuesday will announce a broad new research initiative, starting with $100 million in 2014, to invent and refine new technologies to understand the human brain, senior administration officials said Monday.
“The underlying assumptions about ‘mapping the entire brain’ are very controversial,” said Donald Stein, a neuroscientist at Emory University in Atlanta. He said changes in brain chemistry were “not likely to be able to be imaged by the current technologies that these people are proposing.”
Emphasizing the development of technologies first, he said, is not a good approach. “I think the monies could be better spent by first figuring out what needs to be measured and then figuring out the most appropriate means to measure them.” he said. “In my mind, the technology ought to follow the concepts rather than the other way around.”
However, supporters of the initiative argued that it could have a similar impact as the Sputnik satellite had in the 1950s, when the United States started a significant nationwide effort to invest in science and technology.
Read the whole story: The New York Times
Published April 2, 2013
Robots for Research
From Observer Print Issue Childlike robots are increasingly at the center of a unique form of integrative science, bringing together experts in robotics, neuroscience, linguistics, and child development.
Experience Buying and Selling Reduces Financially Costly Biases
From Minds for Business When participants practiced trading items on eBay, they became less prone to certain economic biases and showed changes in brain areas linked to loss aversion.
National Science Foundation Announces Opportunity to Develop National Infrastructure for Neuroscience Research
From Announcements The National Science Foundation (NSF) announces its Next Generation Networks for Neuroscience (NeuroNex) program and opens the call for submissions for two types of projects: neurotechnology hubs and theory teams. […]
Exploring Echinacea's Enigmatic Origins
An Agricultural Research Service (ARS) scientist is helping to sort through the jumbled genetics of Echinacea, the coneflower known for its blossoms"”and its potential for treating infections, inflammation, and other human ailments.
Only a few Echinacea species are currently cultivated as botanical remedies, and plant breeders would like to know whether other types also possess commercially useful traits. ARS horticulturist Mark Widrlechner, who works at the ARS North Central Regional Plant Introduction Station (NCRPIS) in Ames, Iowa, is partnering in research to find out how many distinct Echinacea species exist. Previous studies have put the number between four and nine species, depending on classification criteria.
Working with Iowa State University scientists, Widrlechner selected 40 diverse Echinacea populations for DNA analysis from the many populations conserved at the NCRPIS. Most of these Echinacea populations were found to have a remarkable range of genetic diversity.
DNA analysis suggested that when much of North America was covered with glaciers, Echinacea found southern refuges on both sides of the Mississippi River. But when the glaciers receded after thousands of years, the groups came together as they moved northward and began to hybridize, which might have blurred previous genetic distinctions.
The research team also analyzed the same populations for chemical differences in root metabolites. These metabolites, which are often essential for survival and propagation, can vary widely among species and may have benefits for human health.
Using this approach, researchers were able to identify clear distinctions among all 40 populations. These distinctions were organized into three composite profiles that accounted for almost 95 percent of the metabolite variation among the populations.
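The article does not name the statistical method behind the three "composite profiles," but reducing many correlated metabolite measurements to a handful of components that capture most of the variation is classically done with principal component analysis. The sketch below assumes that approach and uses made-up data purely to show the mechanics; only with the real metabolite matrix would the first three components approach the reported 95 percent.

```python
import numpy as np

# Hypothetical matrix: 40 Echinacea populations x 30 root metabolites.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 30))
X -= X.mean(axis=0)                      # center each metabolite column

# Principal component analysis via singular value decomposition.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)          # fraction of variance per component

print("variance captured by the first three components:", explained[:3].sum())
scores = X @ Vt[:3].T                    # each population's position on the 3 "profiles"
# Populations that cluster together in 'scores' would correspond to the
# metabolite-based species groupings described in the study.
```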
Additional analysis of metabolite variation indicated that the populations grouped together in ways that aligned well with earlier Echinacea species assignments that were based on plant morphology. This work suggested that there were nine distinct species, not just four.
Results from this work were published in Planta Medica.
By Ann Perry, ARS
Read more about this research in the March 2010 issue of Agricultural Research magazine
Image Courtesy Bruce Marlin - Wikipedia
Genetic Research Confirms Non-Africans Are Part Neanderthal
Some of the human X chromosome originates from Neanderthals and is found exclusively in people outside Africa, according to an international team of researchers led by Damian Labuda of the Department of Pediatrics at the University of Montreal and the CHU Sainte-Justine Research Center. The research was published in the July issue of Molecular Biology and Evolution.
"This confirms recent findings suggesting that the two populations interbred," says Dr. Labuda. His team places the timing of such intimate contacts and/or family ties early on, probably at the crossroads of the Middle East.
Neanderthals, whose ancestors left Africa about 400,000 to 800,000 years ago, evolved in what is now mainly France, Spain, Germany and Russia, and are thought to have lived until about 30,000 years ago. Meanwhile, early modern humans left Africa about 80,000 to 50,000 years ago. The question on everyone's mind has always been whether the physically stronger Neanderthals, who possessed the gene for language and may have played the flute, were a separate species or could have interbred with modern humans. The answer is yes, the two lived in close association.
"In addition, because our methods were totally independent of Neanderthal material, we can also conclude that previous results were not influenced by contaminating artifacts," adds Dr. Labuda.
Dr. Labuda and his team almost a decade ago had identified a piece of DNA (called a haplotype) in the human X chromosome that seemed different and whose origins they questioned. When the Neanderthal genome was sequenced in 2010, they quickly compared 6000 chromosomes from all parts of the world to the Neanderthal haplotype. The Neanderthal sequence was present in peoples across all continents, except for sub-Saharan Africa, and including Australia.
"There is little doubt that this haplotype is present because of mating with our ancestors and Neanderthals. This is a very nice result, and further analysis may help determine more details," says Dr. Nick Patterson, of the Broad Institute of MIT and Harvard University, a major researcher in human ancestry who was not involved in this study.
"Dr. Labuda and his colleagues were the first to identify a genetic variation in non-Africans that was likely to have come from an archaic population. This was done entirely without the Neanderthal genome sequence, but in light of the Neanderthal sequence, it is now clear that they were absolutely right!" adds Dr. David Reich, a Harvard Medical School geneticist, one of the principal researchers in the Neanderthal genome project.
So, speculates Dr. Labuda, did these exchanges contribute to our success across the world? "Variability is very important for long-term survival of a species," says Dr. Labuda. "Every addition to the genome can be enriching." An interesting match, indeed.
About the study: "An X-linked haplotype of the Neandertal origin is present among all non-African populations" was published in the July 2011 issue of Molecular Biology and Evolution. The authors are Vania Yotova, Jean-Francois Lefebvre, Claudia Moreau, Elias Gbeha, Kristine Hovhannesyan, Stephane Bourgeois, Sandra Bédarida, Luisa Azevedo, Antonio Amorim, Tamara Sarkisian, Patrice Hodonou Avogbe, Nicodeme Chabi, Mamoudou Hama Dicko, Emile Sabiba Kou' Santa Amouzou, Ambaliou Sanni, June Roberts-Thomson, Barry Boettcher, Rodney J. Scott, and Damian Labuda.
The study was supported by grants from the Canadian Institutes of Health Research.
A river runs backward. Erosion and other processes taking place at Earth's surface help explain why large portions of the Amazon River (watershed depicted in lighter colors) reversed course.
Image: Jesse Allen/NASA (using SRTM data courtesy of Global Land Cover Facility/U. MD); River data: WWF HydroSHEDS Project
Why the Amazon flows backward
By Sid Perkins | Jul. 15, 2014, 3:45 PM
Millions of years ago, rivers flowing westward across what is now northern Brazil reversed their course to flow toward the Atlantic, and the mighty Amazon was born. A previous study suggested that the about-face was triggered by gradual changes in the flow of hot, viscous rock deep beneath the South American continent. But new computer models hint that the U-turn resulted from more familiar geological processes taking place at Earth’s surface—in particular, the persistent erosion, movement, and deposition of sediment wearing away from the growing Andes.
The Andes mountains lie just inland of the western coast of South America. The central portion of that mountain range began growing about 65 million years ago, and the northern Andes started rising a few million years later, says Victor Sacek, a geophysicist at the University of São Paulo in Brazil. Yet, field studies suggest that the Amazon River, which today carries sediment-laden water from the Andes across the continent to the Atlantic Ocean, didn’t exist in its current form until about 10 million years ago. Before then, rainfall across much of what is now the Amazon Basin drained westward into massive lakes that formed along the eastern rim of the Andes and then flowed north via rivers into the Caribbean. The geological processes that caused ancient drainage patterns to shift to their modern configurations have been hotly debated.
The lakes east of the Andes formed in a long trough created when the immense weight of that growing mountain chain pressed Earth’s crust downward, Sacek says. But for some reason, the terrain beneath the trough slowly gained elevation over millions of years, and those lakes gradually gave way to a long-lived region of wetlands covering an area the size of Egypt or larger. Later, after the landscape rose even farther, the wetlands disappeared altogether. Previously, scientists proposed that changes in the circulation of molten material in Earth’s mantle—the slow-flowing material that lies between our planet’s core and its crust—pushed the terrain east of the Andes upward, thereby changing drainage patterns.
But new research pins the blame on something more mundane: erosion. Sacek developed a computer model that includes interactions between growth of the Andes, the flexing of Earth’s crust in the region, and climate. (For instance, as the mountains rise, they intercept more moist airflow and receive more rainfall, which in turn boosts the rate of erosion.) The model simulates the evolution of South American terrain during the past 40 million years—a period that commenced after the birth of the central Andes but before the eastern flank of those mountains began to rise, Sacek notes.
Results of the simulation reproduce much of the evidence seen in the geological record, Sacek reports online ahead of print in Earth and Planetary Science Letters. Initially, the lakes form east of the Andes because the mountains press Earth’s crust downward to form a trough faster than sediment can fill it. Then the sinking of the terrain slows down, and accumulation of sediment spilling off the Andes catches up, gradually filling in the lakes and building the landscape higher. Eventually, the terrain just east of the mountain chain becomes higher than that in the eastern realm of the Amazon Basin, a shift that provides a downhill slope extending all the way from the Andes to the Atlantic beginning about 10 million years ago.
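Sacek's actual model is two-dimensional and couples flexure, climate and erosion in far more detail, but the sequence he describes (a trough that sinks faster than sediment arrives, then fills as subsidence wanes and sediment catches up) can be caricatured in a few lines. Every rate and coefficient below is invented for illustration, not taken from the paper.

```python
import numpy as np

# One-dimensional caricature of a west (Andes) to east (Atlantic) transect.
nx = 200
x = np.linspace(0.0, 3000e3, nx)                 # 3000 km transect, in metres
dx = x[1] - x[0]
dt, steps = 5.0e4, 800                           # 50 kyr steps, ~40 Myr in total

elev = np.zeros(nx)
uplift = 1.0e-4 * np.exp(-(x / 250e3) ** 2)      # rock uplift of the "Andes" (m/yr)
trough = np.exp(-((x - 600e3) / 200e3) ** 2)     # shape of the foreland trough
kappa = 2.0e3                                    # sediment transport "diffusivity" (m^2/yr)

for step in range(1, steps + 1):
    t = step * dt
    sinking = -6.0e-5 * trough * np.exp(-t / 1.5e7)      # subsidence wanes with time
    curvature = np.gradient(np.gradient(elev, dx), dx)
    elev += dt * (uplift + sinking + kappa * curvature)  # tectonics + erosion/deposition
    elev[-1] = 0.0                                       # Atlantic margin pinned at sea level
    if step in (200, 400, 800):
        print(f"~{t/1e6:.0f} Myr: foreland trough elevation {elev[40]:8.1f} m")

# With these toy numbers the trough first deepens (the stage of westward-draining
# lakes and wetlands) and later shallows as sediment shed from the range catches
# up, the same sequence that tilts the regional slope toward the Atlantic.
```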
“Erosion and sedimentation are powerful forces,” says Jean Braun, a geophysicist at Joseph Fourier University in Grenoble, France. Sacek’s model shows that these processes explain the geological record seen in northern South America, “and they do so with the right timing,” he adds. They also suggest that the amount of sediment carried to the mouth of the Amazon each year and then dumped offshore should increase over time—something actually seen in sediment cores drilled from that area. “That’s a nice bit of prediction by the model,” Braun says.
The gradually increasing rate of sediment accumulation possibly stems from the long time needed for the material to hopscotch its way across the continent, being dumped in one spot and then remobilized by erosion later, says Carina Hoorn, a geologist at the University of Amsterdam. Or, she suggests, the increase may stem from a geologically recent boost in erosion in the Andes triggered by a series of ice ages that commenced about 2.4 million years ago.
One thing Sacek’s model doesn’t do a good job of predicting, he admits, is the size, shape, and persistence of the large area of wetlands that formed in what is now the central Amazon Basin between 10.5 million and 16 million years ago. But it’s possible, he notes, that changes in mantle circulation beneath the region did play a minor role in the evolution of the terrain. Sacek will try to incorporate such processes into future versions of his terrain simulation, to see if they better explain how the landscape evolved.
Such changes in mantle flow are “difficult to quantify and even more difficult to discern [in the real world],” Braun says. But by combining the modest effects of such changes with those triggered by surface processes such as erosion, “you might end up with something that works.”
Sid Perkins
Sid is a freelance science journalist.
FTC: T-Mobile made millions in bogus charges
By ANNE FLAHERTY The Associated Press First Published Jul 01 2014 03:37PM
• Last Updated Jul 01 2014 03:37 pm
Washington • T-Mobile USA knowingly made hundreds of millions of dollars off its customers in potentially bogus charges, a federal regulator alleged Tuesday in a complaint likely to mar the reputation of a household name in wireless communications.
In its complaint filed in a federal court in Seattle, the Federal Trade Commission claimed that T-Mobile billed consumers for subscriptions to premium text services such as $10-per-month horoscopes that were never authorized by the account holder. The FTC alleges that T-Mobile collected as much as 40 percent of the charges, even after being alerted by other customers that the subscriptions were scams.
"It's wrong for a company like T-Mobile to profit from scams against its customers when there were clear warning signs the charges it was imposing were fraudulent," said FTC Chair Edith Ramirez in a statement. "The FTC's goal is to ensure that T-Mobile repays all its customers for these crammed charges."
The Federal Communications Commission has launched a separate inquiry into T-Mobile's billing practices, which could result in fines if it finds any wrongdoing.
The practice is often referred to as "cramming": businesses stuff a customer's bill with bogus charges associated with a third party. In this case, the FTC says T-Mobile should have realized that many of these premium text services were scams because of the high rate of customer complaints. In some cases, the FTC says, as many as 40 percent of customers demanded refunds in a single month on certain services.
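The FTC's "warning signs" argument is, at bottom, a thresholding claim: a legitimate subscription service does not see a large share of its charges disputed and refunded month after month. Below is a toy sketch of the kind of screen a carrier could run over its own billing records; the data layout and the 10 percent cut-off are arbitrary illustrations, not figures from the complaint.

```python
from collections import defaultdict

# Stand-in third-party billing records: (service, customer, amount, refunded?)
charges = [
    ("horoscope_10", "a", 9.99, True), ("horoscope_10", "b", 9.99, True),
    ("horoscope_10", "c", 9.99, False), ("ringtone_club", "d", 4.99, False),
]

stats = defaultdict(lambda: [0, 0])              # service -> [charges billed, charges refunded]
for service, _customer, _amount, refunded in charges:
    stats[service][0] += 1
    stats[service][1] += int(refunded)

for service, (billed, refunded) in stats.items():
    rate = refunded / billed
    if rate > 0.10:                              # flag services with unusually high refund rates
        print(f"flag {service}: {rate:.0%} of charges refunded")
```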
The FTC said one way for consumers to try to prevent fraudulent charges is to ask their providers to block all third-party businesses from providing services on their phones.
T-Mobile did not immediately respond to a request for comment. Headquartered in Bellevue, Washington, T-Mobile US, Inc., is a publicly traded company. According to its website, Deutsche Telekom AG maintains a 67 percent ownership in the company's common stock.
Sprint Corp., the third-largest cellphone carrier, is in talks to buy T-Mobile US Inc., according to published reports. Analysts believe such a link-up would face stiff opposition from the same regulators who blocked AT&T from buying T-Mobile in 2011.
EA announces first Sims 3 expansion
on August 3, 2009, 4:07 PM
Just a few short months after EA released The Sims 3, it is planning to offer the world an expansion to the game. Initially to be released for Mac and Windows platforms, the forthcoming World Adventures pack will add travel sites around the globe, exploration and treasure hunting, and numerous other bonuses to the game. It should be in stores by mid-November, with mobile versions due out next year.
The game has been available for only two months now, but opened up to break sales records. Over 1.4 million copies were sold during the first week, and it has continued to enjoy success past launch. Whether or not it's too soon for an expansion, the fans will decide. Love or hate The Sims, you probably still have an opinion on how EA is handling one of its most successful franchises ever. What's your take: is EA blatantly milking The Sims, or will it actually produce enough new and interesting content to justify an expansion?
More Evidence Suggests Honeybees Are Dying en Masse Because of Pesticides
Honeybees exposed to a certain class of insecticide are more likely to die from Colony Collapse Disorder (CCD), the name given to whatever is causing the mass decline in bees over the past several years, according to a new study.
Honeybees exposed to a certain class of insecticide are more likely to die from Colony Collapse Disorder (CCD), the name given to the mass decline in the bee population over the past six years, according to a new study.

The report, which appears today in the Bulletin of Insectology, recreates a 2012 study which first linked the bee-killing disease with neonicotinoids. The same team of researchers from the Harvard School of Public Health who conducted the 2012 study ran this later one, and their findings bolster their earlier conclusions. According to lead author Chensheng (Alex) Lu, "We demonstrated again in this study that neonicotinoids are highly likely to be responsible for triggering CCD in honey bee hives that were healthy prior to the arrival of winter."

To perform the latest study, the researchers examined 18 bee colonies in three different locations in central Massachusetts. They split each colony into three groups — one treated with a neonicotinoid called imidacloprid, one with a neonicotinoid called clothianidin, and one left in pristine condition to serve as a control group. The scientists monitored the groups from October 2012 to April 2013 and found that, by the end of that period, half of the neonicotinoid colonies had been decimated, while only one of the control colonies was destroyed by a common intestinal parasite, Nosema ceranae. None of the bees were affected until winter, the authors write: We found honey bee colonies in both control and neonicotinoid-treated groups progressed almost identically, and observed no acute morbidity or mortality in either group until the arrival of winter... As temperatures began to decrease in late October 2012, we observed a steady decrease of bee cluster size in both control and neonicotinoid-treated hives continued to decline.
The new study also offered some novel information. When the team conducted the research in 2012, a whopping 94 percent of infested colonies died off. The discrepancy suggests that the harsh winter of the period studied (2010-2011) likely exacerbated the effects of the insecticides. Further, the study put to rest the notion that parasites are contributing to CCD. Discover explains: When CCD first emerged in honeybee colonies in the mid 2000s, N. ceranae was put forward as a possible cause. Subsequent research in Europe, however, has suggested N. ceranae was widespread in many areas before CCD and is not associated with the phenomenon. Although other studies have suggested that pesticides, particularly neonicotinoids, cause bees to become more susceptible to mites or other parasites that then kill off the bees, today’s study found that bees in the CCD hives had the same levels of parasite infestation as the control colonies. The loss of honeybees is concerning because they pollinate roughly one-third of all crops, globally, and up to 80 percent of U.S. crops. And this study is just a jumping off point. Says Lu, "future research could help elucidate the biological mechanism that is responsible for linking sub-lethal neonicotinoid exposures to CCD." Here's hoping. This article is from the archive of our partner The Wire.
2016-40/4016/en_head.json.gz/6311 | Product Review - 03/04/09
Nextar Q4 GPS unit
By John Virata
A few years ago, when Global Positioning Systems were all the rage, the Magellans and the Garmins dominated the market. Today, you can find GPS systems with Navteq maps virtually everywhere at crazy cheap price points. One such GPS unit is the Nextar Q4. Powered by Navteq maps, the Q4 features a 4.3-inch touch screen color display and Text to Speech Navigation with turn-by-turn voice prompting. The device, which retails for $299 but can be had for less than $175, is built by Nextar, a company that offers everything from digital photo frames to waterproof Bluetooth speakers to MP3 players. The Nextar Q4 is one of many models in its line. In addition to the navigation capabilities, the Q4 also plays MP3 audio files, sports an address book with support for 300 addresses, and has the capability to display photographs as well.

Navigating

The Q4 does well enough feature-wise in the navigation department. It offers 2D or 3D map viewing modes, day and night mode, and an easy enough to use touch screen display. It isn't too cramped, and gets the job done for the most part. Maps are stored on a 2GB SD card, as is the device's Points of Interest database, of which the Q4 supports 1.6 million, giving you a fair amount to choose from if you are looking for, say, a Vietnamese restaurant in Little Saigon.

Nextar Q4

While the turn-by-turn directions will get you to your destination, there are a few annoyances, such as the automatic route recalculation feature. There were instances when I deviated from the set destination, and the GPS wouldn't be able to quickly figure out what just occurred, telling me to turn left, then right for no apparent reason. When I ignored those requests and drove down a road parallel to the one the GPS suggested, it kept trying to get me to turn left to get back on its intended road. Before every major street came up, it would prompt me to prepare to turn left. Other systems I've used would just recalculate and take into consideration that I wasn't ready to turn left. In other instances, the GPS would tell me to follow the road "for a while," which I thought was a funny statement. It does feature the standard route choices, though. You can calculate the fastest or shortest route, or set it to avoid freeways or tollways. You can always reset the route calculation settings when you want to get there a different way.

First Impressions

The Q4 is an ideal GPS for the first-time user. It has a 4.3-inch screen, which is neither too small nor too big, making it easier for users in California to comply with the state's new 2009 mounting requirements by placing it in the bottom left corner of the windshield without the hassles of a bigger, more unwieldy unit. To mount the device in the right corner of the windshield (an alternative location) would be more of a hassle given the size of the display. Users in other states won't have this issue, so the Q4 would do well right where most people mount their units: in the lower center of the windshield or on the dash. The Q4 has enough points of interest to keep things interesting at 1.6 million. Who needs 6 million POIs anyway? It comes with a one-year warranty, which is nice, as well as a whole bunch of accessories, including a soft protective case, USB cable, car charger and AC adapter for charging the unit in the home or office, and the windshield mounting device, which works well and can hold the unit with no loss of suction.
I really enjoy the touch screen of this unit as well as the bright display. There are no Bluetooth capabilities, but Nextar has other models at slightly higher price points that offer Bluetooth. The Q4 offers enough to keep you from getting lost, and it is inexpensive compared to the name-brand GPS units on the market. For more information, visit www.nextar.com
John Virata is senior editor of Digital Media Online. You can email him at [email protected]
2016-40/4016/en_head.json.gz/6323 | Unearthly Powers
Wendy Lesser
The End of Eternity
by Isaac Asimov.
Tor/Tom Doherty Associates, 2010,
$24.99 cloth.
The world we inhabit is one in which weekly newsmagazines, printed on paper in columns of type, are considered primitive and profoundly obsolescent; in which an entire bookshelf of bound volumes can be stored in a gadget the size of a fingertip; in which a mechanical device that is only about four inches long and a fraction of an inch thick can record whatever we like, play it back to us through a tiny earpiece, and rest comfortably in a pocket when not in use; in which space flight has been invented but is rarely used by humans, who have lost interest in it after the initial decades of excitement; in which hand-held or easily portable computers are a commonplace item; in which literature can hardly be distinguished from film in the public mind; and in which some members of society long fruitlessly for a past era when all such developments were unknown and almost inconceivable.
We do, in fact, live in such a world, but I mean something else. The above description, detail by detail, exactly characterizes the world of Isaac Asimov’s The End of Eternity, a science-fiction novel set mainly in the 482nd, 575th, and 2456th centuries. What is remarkable is that Asimov’s book first appeared in print in 1955.
For those of you who were not around then (and I barely was; I was three at the time), let me assure you that none of the present-day realities mentioned in my first paragraph was even a mote in a scientist's eye. In 1955, the year my family moved to Palo Alto, my father had just started working for IBM, where he helped develop the huge mainframe computer that would eventually become the great-great-great-grandfather of Macs and PCs alike. By 1966 or 1967, when I began reading Isaac Asimov novels, a version of that mainframe had recently become available for use in a few high-school computing classes, so that some of us in the Palo Alto school system were taught how to inscribe the punch-cards that fed into the mechanical maw, a process so inhuman and alienating and difficult, so resolutely digital in its outlook, that I was determined never to have anything to do with computers again. This resolve disintegrated in about 1983, when I purchased my first "personal computer," a boxy Kaypro whose 64-kilobyte RAM was laughably minute by today's standards, but whose CPU was nonetheless more powerful (or so the salesman told me) than the massive computer that first flew a man to the moon in 1969. And this is not to speak of laptops, cell phones, flashdrives, iPods, DVDs, Kindles, and all the other devices which only came into widespread use in the last decade or two. Asimov thought all this would take many centuries; instead, it took less than two generations. Yet if he was wrong about the timing, he was fantastically right about not only the inventions themselves, but the effect they would have on society.
Part of the pleasure of reading old science fiction is precisely this: with the special powers vested in you by historical hindsight, you can compare the playfully visionary forecasts with what actually took place. This puts you rather in the position of Asimov's "Eternals," the characters in The End of Eternity who stand outside of time, observing and controlling the vast majority who still live within it. The Eternals, contrary to what their name suggests, do not live forever; they age and die just as normal people do. But they have such extensive powers of technical analysis (their highest-ranked functionaries are called Computers, who are superior to Sociologists, who are above Technicians) that they are capable of predicting what will happen to any individual human or group of humans. And because they also have at their beck-and-call an easy form of time travel (consisting of "kettles" that whizz along preset pathways in the fourth dimension, taking them many centuries "upwhen" or "downwhen"), they can actually enter into history at specific points in time and repeatedly change it. These so-called Reality Changes might involve something as small as moving a container from one shelf to another, or as large as engineering the deaths of a dozen people in a crash. The aim is always to produce the Maximum Desired Response (M.D.R.) with the Minimum Necessary Change (M.N.C.): to insure, in short, that the unpleasant or anti-social or generally disruptive event does not occur, and thus to keep mankind in a state of comfortable if slightly dull equilibrium.
Though technology is what makes this kind of reality-control possible, only a human being is capable of finding exactly the right moment and method of change. “Mechanical computing would not do,” Asimov’s typically invisible, intangible narrator tells us. “The largest Computaplex ever built, manned by the cleverest and most experienced Senior Computer ever born, could do no better than to indicate the ranges in which the M.N.C. might be found. It was then the Technician, glancing over the data, who decided on an exact point in that range. A good Technician was rarely wrong. A top Technician was never wrong.” And then, in the kind of portentous single-sentence paragraph in which science fiction delights, Asimov adds: “Harlan was never wrong.”
Harlan is our hero, a man whose "homewhen," or time of origin, is the 95th century, but who as a teenager was lifted out of Time to become one of the Eternals. (Forgive me; the capitals are all Asimov's.) Like all Eternals, he can never go back to his own century, not only because the rules forbid it, but because if he went back he would, like Jimmy Stewart in It's a Wonderful Life, find everything horribly changed; he would learn that he had never had a home or a mother or an existence of any kind, because the ongoing series of Reality Changes (some, perhaps, implemented by himself) would have wiped him off the record. So instead he travels light, moving from one century to another, putting in the fix as needed, obeying his superiors, and only occasionally wondering why life is structured the way it is and whether Eternity really lasts forever. (Apparently it doesn't: even the Eternals cannot get into the "hidden" centuries between the 70,000th and the 150,000th, and when they enter the system after that, all they find is a dead, uninhabited, featureless world.)
I won’t go any further into the plot of this novel. If you have never been a science-fiction fan, I will long since have lost you anyway. But if you ever were a fanas I was, quite obsessively, in my teensyou cannot do better than to return to the works of Isaac Asimov. Cheesy as the love story inevitably is, and inconsistent as some of the time-related logic turns out to be (why, for instance, does Harlan have to cancel an appointment in the 575th century in order to go to the 3000th and see a man who is “free this afternoon,” when normal logic tells us he could have gone and returned in a matter of minutes, or even seconds?), the essential storyline has a deeply compelling quality that isto me, at leastirresistible. As I approached the end of this novel, I found myself agitatedly turning pages in the way I always do in the last hundred pages of a Henry James novel (even, I’ll confess, a Henry James novel I have read before). And, as in a James novel, the propulsive force is a desire to find out how things turn out for these deeply knowing but finally helpless characters, who are up against moral dilemmas they can’t easily solve, and who are impeded in their attempted solutions by people who are socially and economically more powerful than they are.
The End of Eternity may be one of Asimov's better novels, but it follows the same essential pattern as all his others, as I discovered when I went back recently to reread Foundation's Edge and The Robots of Dawn. Like all obsessive writers, Isaac Asimov is a victim of the repetition compulsion, reproducing a single novel over and over again in all its myriad forms. His goes something like this: An individual with good powers of analysis and logic, as well as a great deal of modestly worn courage, confronts a gigantic system that is out to thwart him because he threatens, wittingly or unwittingly, to bring about its downfall. In the course of his efforts, he has to rely on other people without knowing for sure which ones are his friends or lovers and which his enemies or betrayers. He is good at crossing cultural boundaries and even interacting with other life forms (some of Asimov's most touching relationships are those between human and robot), but he retains a stubborn, almost curmudgeonly affection for the values and sensations of his own home place. Generally this place is Earth, and even when it is not, he and his entire culture have a kind of residual nostalgia (though also a civilized man's anti-primitive aversion, or an adult's anti-infantile one) for that long-lost homeland, that long-gone birthplace of the human race.
One of the advantages of looking back on Asimov's work from the remove of several decades, not to mention the turn of a century, is that one can see how deeply enmeshed he was in the history of his own time. He was the child of Russian emigrants who left the Soviet Union for America in 1923, just three years after their son Isaac was born; and one can, if one chooses, view his whole science-fiction oeuvre as a recapitulation of the Soviet experiment and the Cold War reaction to it. Yet these novels, although they wear their anti-totalitarian garb as prominently as Orwell's ever did, are unlikely ever to be kidnapped by the right, for the simple reason that all the individualistic, novelty-mongering American virtues are countered in Asimov's work (and sometimes outweighed) by their opposites: that is, a belief in collective effort, a passion for history, and an ineradicable pessimism about the prospects for human progress. For Asimov, super-civilization and technological achievement always go hand-in-hand with a general softening or attenuation of the human spirit, and it is only by getting back to basics (or intuition, or felt sensation) that people can continue to move ahead. It is an essentially nostalgic view, and as such it is profoundly Russian, however much Asimov may have felt himself to be a fully fledged citizen of his new country.
The author’s note attached to the 2010 reissue of The End of Eternity tells us that Isaac Asimov, in addition to writing vast quantities of science fiction, “taught biochemistry at Boston University School of Medicine and wrote detective stories and nonfiction books on Shakespeare, the Gilbert and Sullivan operettas, biochemistry, and the environment. He died in 1992.” But if we go back a mere quarter of a century or so, to the 1984 Ballantine paperback of The Robots of Dawn, we can locate ourselves at a moment when the author himself (not to mention Ballantine Books) was still with us. In the author’s note to that book, we learn that “at the present time, he has published over 260 books, distributed through every major division of the Dewey system of library classification, and shows no signs of slowing up. He remains as youthful, as lively, and as lovable as ever, and grows more handsome with each year. You can be sure that this is so since he has written this little essay himself and his devotion to absolute objectivity is notorious.” If you are one of those people who, like myself, remains committed to the primitive, cellulose-based habits of reading, the pages on which you read this will be yellowed and flaking; but the voice will be as strong and as vitally alive as ever. Now, that’s what I call time travel.
Wendy Lesser, the founding editor of The Threepenny Review, is the author of eight books. Her ninth, a life of Dmitri Shostakovich as viewed through his string quartets, will be out from Yale in 2011.
2016-40/4016/en_head.json.gz/6337 | Intelligent InSites Announces the Appointment of Shane Waslaski as President and Chief Executive Officer
FARGO, N.D. --(Business Wire)-- Intelligent InSites, Inc., the leading provider of operational intelligence for healthcare, today announced the appointment of Shane Waslaski as President and Chief Executive Officer. Waslaski will also join the Board of Directors of Intelligent InSites.
Waslaski brings a proven leadership background in complex operations across the healthcare, manufacturing, infrastructure, and transportation industries, and he has served in various executive roles throughout his career, including President, CEO, COO, and Senior Vice President.
Most recently, Waslaski was President of Varistar, where he led a manufacturing and infrastructure business portfolio with 1,600 employees and revenues of more than $500M. Varistar is a management and holding subsidiary of Otter Tail Corporation (NASDAQ: OTTR), where he also served as a corporate officer and Senior Vice President.
Prior to these roles, Waslaski held operational leadership positions in healthcare delivery organizations and consulted for a wide range of healthcare customers, including hospitals, clinics, and insurers.
"In our search for the next leader of Intelligent InSites, we sought a seasoned and dynamic executive with an exceptional understanding of healthcare operations," said Doug Burgum, Executive Chairman of the Board and Interim President and CEO of Intelligent InSites. "Shane brings the ideal blend of proven leadership, customer focus, and operational intelligence, along with a breadth of expertise in lean principles and efficient healthcare delivery models."
"I am delighted to join Intelligent InSites at this exciting time in the industry and in the company's trajectory," said Shane Waslaski, the newly appointed President, Chief Executive Officer, and Board Member of Intelligent InSites. "Intelligent InSites is setting operational intelligence standards for the healthcare industry-enabling healthcare providers to ensure the safest and mosteffective health delivery, in which the patient's experience is unmarred by waste, inefficiency, and bureaucracy. It will be a distinct honor to work with this talented team and our dedicated partners as we continue bringing our proven solution to this rapidly transforming industry."
Waslaski's appointment follows Intelligent InSites' recent funding announcement from Health Insight Capital, a subsidiary of HCA, the world's largest for-profit operator of healthcare facilities. Intelligent InSites' customers also include the Veterans Health Administration, the nation's largest integrated healthcare system, which is implementing the Intelligent InSites operational intelligence solution across 152 VA Medical Centers and seven Consolidated Medical Outpatient Pharmacy facilities, and Sanford Health, the nation's largest not-for-profit rural healthcare system. Intelligent InSites is also expanding internationally, with customers such as Bumrungrad International Hospital, the first Asian hospital accredited by Joint Commission International.
After transitioning President and CEO duties to Waslaski, Burgum will remain as Executive Chairman of the Board of Directors of Intelligent InSites.
About Intelligent InSites (www.intelligentinsites.com/)

Intelligent InSites helps transform healthcare with real-time operational intelligence that improves care, enhances the human experience, and increases efficiency. Through its open, real-time healthcare platform, Intelligent InSites automatically collects and processes data from multiple data sources such as EHRs, financial systems, building systems, sensory and real-time location systems (RTLS), mobility solutions, and other healthcare IT solutions, and then provides actionable intelligence to achieve cost savings, operational excellence, and better care. By utilizing the enterprise-wide architecture of the Intelligent InSites platform, healthcare systems can leverage all legacy, current, and future data sources to optimize their technology investments across the entire organization, then have the flexibility to meet changing organizational, regulatory, and compliance needs.
Intelligent InSites and the Intelligent InSites logo are trademarks or registered trademarks of Intelligent InSites and/or its affiliates in the U.S. and other countries. Third-party trademarks mentioned are the property of their respective owners.
2016-40/4016/en_head.json.gz/6340 | 'Big Bang Theory' secret weapon? The real-life science guy
Randee Dawn TODAY
GREG GAYNE / WARNER BROS.
Dr. David Saltzberg.
Writers on “The Big Bang Theory” were in a scientific pickle at the end of last season. They were sending Leonard (Johnny Galecki) on a research expedition out in the ocean, but needed to link what he would be studying with theoretical physicist Stephen Hawking. The problem? Hawking deals with black holes and quantum mechanics, not so much the deep blue sea.
Dr. David Saltzberg to the rescue! A professor of physics and astronomy at the University of California, Dr. Saltzberg is not a writer, director, producer or actor — but he is one of the hit sitcom’s most valuable players: Their science adviser. Thanks to him, the show not only looks smart and savvy about science matters, but is actually smart, savvy and accurate about science matters.
And as the new season’s first episode will reveal on Sept. 26, Saltzberg found what he says is the “one unique solution” that would combine Hawking with oceanography (it has to do with replicating certain aspects of black holes under water). “So that’s what Leonard is off studying,” Saltzberg told TODAY.
For the past seven years, Saltzberg has been coming up with just those sorts of real-science solutions for the series. He'll suggest plots like the one above, but just as often he's behind the jargon and scientific theories that pop up in the show's dialogue. When he gets a script, it may include several lines that read "SCIENCE TO COME." That's his cue to start plugging holes with the real deal.
And he’s not working for science rookies, he knows that: “The creators of the show really love science — they read up on it, and they know a lot about it. So sometimes they’ll try their hand at it, and sometimes it’s completely correct and sometimes I offer a little tweak," he said.
“I’m happy to get the science right,” he added. “It’s a lot of fun — though I live in fear of someone finding a mistake that gets through.”
Which happens, but only rarely, says Saltzberg: “Maybe at the 1 percent level we’re still making mistakes — years ago, they were talking about how the beat of cricket chirps depends on the outside temperature, and how this was discovered by Amos Dolbear. We got his first name wrong on the show and said it was ‘Emil,’ not ‘Amos.’ Then I was contacted by (Dolbear's) great-grandson about the mistake, so we invited him to the show to make up for it.”
In addition to his contributions to story and script, Saltzberg is responsible for the equations that appear on the show’s whiteboard, and he says he’s lent equipment like vacuum tubes and flasks to the show. He might get a call after "Theory" has finished filming an episode and is in post-production, asking if an experiment they've filmed actually looks right. And he’s on hand at every taping just in case last-minute questions crop up, sometimes bringing a young physicist or even a Nobel Prize winner to the set with him. Saltzberg is even kind of a celebrity himself now, though mainly within the scientific community. “It’s funny that when I go to a conference, more people ask me about the show than how my work is going,” he chuckled. “I still have my day job.”
And as it turns out, he’s not alone on the series as its resident scientist: Actress Mayim Bialik, who plays Sheldon’s girlfriend Amy, has a PhD in neuroscience — and plays a neuroscientist on "Theory." So while that’s not Saltzberg’s field of expertise, he doesn't have to worry. “Mayim will never utter an error about neuroscience on that show,” he said. “She totally has my back.”
“The Big Bang Theory” returns Thursday, Sept. 26 at 8 p.m. on CBS. | 科技 |
2016-40/4016/en_head.json.gz/6343 | What is GPS?
The history of GPS
How it all began

In 1957 the former USSR launched the first man-made satellite: Sputnik 1. Scientists quickly realised that by using the Doppler Effect one could work out a satellite's orbit, and then that, by turning this round, satellites could be used to work out the position of a receiver on Earth.

The foundations of the modern GPS were laid during the early 1960s by the US military. The Navy, Air Force and Army each came up with their own designs and ideas, and eventually in 1973 a design that incorporated elements from each was approved by the US government. This was to become NAVSTAR. The first satellite for the new NAVSTAR GPS was launched in 1974, and from 1978 to 1985 another 11 were launched for testing purposes. The full constellation of 24 satellites, which today allows your navigation system to enjoy worldwide GPS coverage, was completed in 1993.

GPS for everyone

Initially GPS was only intended for military use. But then tragedy struck. On 1st September 1983, Korean Airlines flight KAL007 from Anchorage to Seoul strayed off course into USSR airspace and was shot down by a Soviet Su-15 fighter jet. All 269 passengers and crew were killed. Two weeks later, US President Reagan proposed GPS be made available for civilian use to avoid navigational error ever again leading to such a catastrophe. While by no means the only reason, the Korean Airlines disaster was certainly a major catalyst toward civilian access to GPS.

Selective Availability (SA)

Having spent some $12 billion to develop the most accurate navigation system in the world, the US government then built a function called Selective Availability (SA) into NAVSTAR that would limit its accuracy for civilian users to ensure no enemy or terrorist group could use GPS to make accurate weapons. It worked by introducing deliberate errors into the data broadcast by each satellite. Military users could access the fully accurate system by decrypting a secured second frequency that was broadcast simultaneously.

Then during the Gulf War the US military needed many more GPS receivers than it had. It solved the problem by using civilian GPS receivers. But to increase the accuracy of these devices, the SA function had to be temporarily disabled. Then in 2000, US President Clinton announced that SA would be disabled completely, because US government 'threat assessments' concluded that removing SA would have minimal impact on national security. Though in the same speech, he said the US would still be able to 'selectively deny' GPS signals on a regional basis when national security was threatened.
2016-40/4016/en_head.json.gz/6356 | Chicken of the Sea: Stop Ripping Up the Sea
Our hunger for canned tuna, one of the last remaining wild foods, is sending many species to the brink of extinction. Overfishing runs rampant in every ocean, from the Pacific to the Indian, and countless animals such as sharks, turtles, rays and baby fish of many types are killed every year due to the behavior of the global tuna industry.
Canned tuna is an affordable source of protein that many families depend on. But if companies like Chicken of the Sea don't change the way they're fishing for tuna, we will lose the ocean ecosystems they depend on, and scarcity will cause the price to skyrocket.

The United States is the world's largest market for canned tuna, and we consume more than six million tons of it a year. As such, it is our responsibility to expose the behavior taking place in the shadows of the tuna industry – and Chicken of the Sea is one of the worst offenders.

Chicken of the Sea is directly responsible for a tremendous amount of ocean destruction. Countless sharks, billfish, baby yellowfin tuna, and juvenile bigeye tuna (a species listed as at high risk for extinction by the International Union for Conservation of Nature) are killed by their use of fish aggregating devices (FADs) with purse seine nets in the production of "chunk light" tuna.

Thousands of turtles, sea birds, sharks, and other animals are slaughtered on indiscriminate conventional longlines in the company's hunt for albacore – or "solid white" – tuna.

There is a better way. In other countries, such as the United Kingdom, the large tuna companies have already switched to more sustainable practices. In fact, Thai Union Group – the parent company of Chicken of the Sea – has already agreed to shift to sustainable canned tuna under its UK brand John West, but refuses to do the same here in the United States. Why does Chicken of the Sea feel that American consumers don't deserve access to responsible, ethical tuna products?

Take action and stand with tens of thousands of other concerned consumers demanding that Chicken of the Sea stop ripping up the sea!

"The balance of power between the fishing fleets and tuna has shifted too far in favor of the fleets."
- Professor Callum Roberts, University of York
2016-40/4016/en_head.json.gz/6390 | pop up description layer HOME Cryptozoology UFO Mysteries Aviation Space & Time Dinosaurs Geology Archaeology Exploration 7 Wonders Surprising Science
Troubled History Library Laboratory Attic Theater Store Index/Site Map
Cyclorama Custom Search
Tweet E-mail this page link to a friend
Enter friend's e-mail: Requires javascript The Mausoleum at Halicarnassus
The Mausoleum at the ancient city of Halicarnassus was the tomb of the king, Mausolus. (Copyright Lee Krystek, 2011)
In 377 B.C., the city of Halicarnassus was the capital of a small kingdom along the Mediterranean coast of Asia Minor. It was in that year that the ruler of this land, Hecatomnus of Mylasa, died and left control of the kingdom to his son, Mausolus. Hecatomnus, a local satrap to the Persians, had been ambitious and had taken control of several of the neighboring cities and districts. Then Mausolus during his reign extended the territory even further so that it eventually included most of southwestern Asia Minor.

Mausolus, with his queen Artemisia, ruled over Halicarnassus and the surrounding territory for 24 years. Though he was descended from the local people, Mausolus spoke Greek and admired the Greek way of life and government. He founded many cities of Greek design along the coast and encouraged Greek democratic traditions.

Seven Quick Facts
Location: Halicarnassus (Modern Bodrum, Turkey)
Built: Around 350 B.C.
Function: Tomb for the city's king, Mausolus
Destroyed: Damaged by earthquakes in the 13th century A.D.; final destruction by Crusaders in 1522 A.D.
Size: 140 feet (43m) high.
Made of: White Marble
Other: Built in a mixture of Egyptian, Greek and Lycian styles
Mausolus's Death

Then in 353 B.C. Mausolus died, leaving his queen Artemisia, who was also his sister, broken-hearted (It was the custom in Caria for rulers to marry their own sisters). As a tribute to him, she decided to build him the most splendid tomb in the known world. It became a structure so famous that Mausolus's name is now associated with all stately tombs throughout the world through the word mausoleum. The building, rich with statuary and carvings in relief, was so beautiful and unique it became one of the Seven Wonders of the Ancient World.

Artemisia decided that no expense was to be spared in the building of the tomb. She sent messengers to Greece to find the most talented artists of the time. These included the architects Satyros and Pytheos, who designed the overall shape of the tomb. Other famous sculptors invited to contribute to the project were Bryaxis, Leochares, Timotheus and Scopas of Paros (who was responsible for rebuilding the Temple of Artemis at Ephesus, another of the wonders). According to the historian Pliny, Bryaxis, Leochares, Timotheus and Scopas each took one side of the tomb to decorate. Joining these sculptors were also hundreds of other workmen and craftsmen. Together they finished the building in the styles of three different cultures: Egyptian, Greek and Lycian.

The tomb was erected on a hill overlooking the city. The whole structure sat in the center of an enclosed courtyard on a stone platform. A staircase, flanked by stone lions, led to the top of this platform. Along the outer wall of the courtyard were many statues depicting gods and goddesses. At each corner, stone warriors mounted on horseback guarded the tomb.

A map of the city of Halicarnassus drawn by the archeologist J. D. Barbié du Bocage in 1802, showing the tomb in the middle of the city.
At the center of the platform was the tomb itself. Made mostly of marble, the structure rose as a square, tapering block to about one-third of the Mausoleum's 140-foot height. This section was covered with relief sculpture showing action scenes from Greek myth and history. One part showed the battle of the Centaurs with the Lapiths. Another depicted Greeks in combat with the Amazons, a race of warrior women.

On top of this section of the tomb, thirty-six slim columns rose for another third of the height. Standing in between each column was another statue. Behind the columns was a solid block that carried the weight of the tomb's massive roof. The roof, which comprised most of the final third of the height, was in the form of a stepped pyramid with 24 levels. Perched on top was the tomb's crowning work of sculpture, carved by Pytheos: four massive horses pulling a chariot in which images of Mausolus and Artemisia rode.

The City in Crisis

Soon after construction of the tomb started, Artemisia found herself in a crisis. Rhodes, an island in the Aegean Sea between Greece and Asia Minor, had been conquered by Mausolus. When the Rhodians heard of his death, they rebelled and sent a fleet of ships to capture the city of Halicarnassus. Knowing that the Rhodian fleet was on the way, Artemisia hid her own ships at a secret location at the east end of the city's harbor. After troops from the Rhodian fleet disembarked to attack, Artemisia's fleet made a surprise raid, captured the Rhodian fleet, and towed it out to sea.

Video: In Honor of the King: The Mausoleum at Halicarnassus
Artemisia put her own soldiers on the invading ships and sailed them back to Rhodes. Fooled into thinking that the returning ships were their own victorious navy, the Rhodians failed to put up a defense and the city was easily captured, quelling the rebellion.

Artemisia lived for only two years after the death of her husband. Both would be buried in the yet unfinished tomb. According to Pliny, the craftsmen decided to stay and finish the work after their patron died, "considering that it was at once a memorial of their own fame and of the sculptor's art."

The Mausoleum overlooked the city of Halicarnassus for many centuries. It was untouched when the city fell to Alexander the Great in 334 B.C. and was still undamaged after attacks by pirates in 62 and 58 B.C. It stood above the city ruins for some 17 centuries. Then a series of earthquakes in the 13th century shattered the columns and sent the stone chariot crashing to the ground. By 1404 A.D. only the very base of the Mausoleum was still recognizable.

Destruction by the Crusaders

Crusaders, who had little respect for ancient culture, occupied the city from the thirteenth century onward and recycled much of the building stone into their own structures. In 1522 rumors of a Turkish invasion caused the Crusaders to strengthen the castle at Halicarnassus (which was by then known as Bodrum), and some of the remaining portions of the tomb were broken up and used within the castle walls. Indeed, sections of polished marble from the tomb can still be seen there today.

Another interpretation of the Mausoleum. (Copyright Lee Krystek, 1998)

At this time a party of knights entered the base of the monument and discovered the room containing a great coffin. Deciding it was too late to open it that day, the party returned the next morning to find the tomb, and any treasure it may have contained, plundered. The bodies of Mausolus and Artemisia were missing, too. The Knights claimed that Moslem villagers were responsible for the theft, but it is more likely that some of the Crusaders themselves plundered the graves. Before grinding much of the remaining sculpture of the Mausoleum into lime for plaster, the Knights removed several of the best works and mounted them in the Bodrum castle. There they stayed for three centuries. At that time the British ambassador obtained several of the statues from the castle, which now reside in the British Museum.

Remains Located by Charles Newton
In 1846 the Museum sent the archaeologist Charles Thomas Newton to search for more remains of the Mausoleum. He had a difficult job. He didn't know the exact location of the tomb, and the cost of buying up all the small parcels of land in the area to look for it would have been astronomical. Instead, Newton studied the accounts of ancient writers like Pliny to obtain the approximate size and location of the memorial, then bought a plot of land in the most likely location. Digging down, Newton explored the surrounding area through tunnels he dug under the surrounding plots. He was able to locate some walls, a staircase, and finally three of the corners of the foundation. With this knowledge, Newton was able to figure out which additional plots of land he needed to buy.

Marble from the tomb can still be seen in Bodrum Castle even today. (Released into public domain by Horvat)
Newton then excavated the site and found sections of the reliefs that decorated the wall of the building and portions of the stepped roof. Also, a broken stone chariot wheel from the sculpture on the roof, some seven feet in diameter, was discovered. Finally, he found two statues which he believed were the ones of Mausolus and Artemisia that had stood at the pinnacle of the building. Ironically, the earthquake that toppled them to the ground saved them. They were hidden under sediment and thus avoided the fate of being pulverized into mortar for the Crusaders' castle. Today these works of art stand in the Mausoleum Room at the British Museum. There the images of Mausolus and his queen forever watch over the few broken remains of the beautiful tomb she built for him.

Copyright 2011 Lee Krystek. All Rights Reserved.
2016-40/4016/en_head.json.gz/6488 | IPCC will show 'we are the volcanoes'
By: Gwynne Dyer | Posted: 09/27/2013 1:00 AM
Campaign strategist James Carville coined the phrase "It's the economy, stupid" to focus the attention of campaign workers on the one key issue that would get Bill Clinton elected president in the 1992 U.S. election. Alas, the authors of the Fifth Assessment of the Intergovernmental Panel on Climate Change (IPCC), published today, have no such sage to guide them. They'll have to make do with me.

The 800-odd authors of the report are selected by their fellow scientists in the various disciplines relevant to climate change as the acknowledged leaders in their field of study. Their job was to review all 14,000 scientific papers on climate change published in the past five years. And they are doing this work at the behest of the world's governments, not as some random pressure group; it is the Intergovernmental Panel on Climate Change.
Scientists are very cautious people. They won't go one millimetre beyond what the evidence makes indisputable, knowing that they will be attacked by rival scientists if they do. They are much more comfortable talking about probabilities rather than certainties. They are, in other words, a nightmare for journalists who have to transmit their findings to the world.

Of the nearly 100 scientists I have interviewed on climate change over the past five years, not one doubted that global warming is a big and frightening problem. Indeed, there was often an undercurrent of panic in their remarks. But when it comes to writing official reports, they retreat into science-speak.

So the Second Assessment of the IPCC, published in 1995, said that it was more than 50 per cent likely that human emissions of greenhouse gases were contributing to global warming. The Third Assessment, in 2001, raised the likelihood to 66 per cent. The Fourth, in 2007, upped the ante to 90 per cent, and the Fifth, this week, says 95 per cent.

But how do you make a headline out of that? How much warming? How fast? And with what effects on human beings? The latest report will run, in its final version, to 3,000 pages, and the answers are buried among the statistics. What would Jim (Carville) do? He'd say: it's the feedbacks, stupid.

Without the feedbacks, we could go on burning fossil fuels and cutting down the forests, and the average global temperature would creep up gradually, but so slowly that most of the inhabited parts of the planet would stay livable for a long time. But if we trigger the feedbacks, the whole thing goes runaway.

The feedbacks are natural sources of warming that we activate by raising the average global temperature just a modest amount with our own greenhouse gas emissions. The consensus number used to be plus 2 degrees Celsius, but some scientists now argue that the real threshold may be as low as +1.5 degrees C. There are three main feedbacks.

As the highly reflective ice and snow that covers most of the polar regions melts, the rate at which the sun's heat is absorbed goes up steeply over a large part of the planet. We are creating a new warming engine that will shift the planet's heat balance, and once it has started we can't turn it off again.

There is reason to believe that it's already too late to avoid this one. The protective covering of floating ice that has shielded the Arctic Ocean from solar heating for so long is now going fast, and we will probably see an ice-free Arctic Ocean in the August-September period as early as the 2020s. Mercifully, this is the smallest of the three major feedbacks in terms of its impact -- but it triggers a bigger one.

The warmer air and water in the Arctic then starts to melt the permanently frozen ground and coastal seabed (permafrost) that extends over more than ten million square kilometres of territory, a considerably larger area than Australia. This melting releases a huge amount of methane that has been locked into the ground for millions of years. Methane is a far more effective warming agent than carbon dioxide, and so we spin closer to runaway.

Finally the oceans, as they warm, release some of the vast quantities of carbon dioxide they absorbed in the past, simply because warmer water can contain less dissolved gas. Most of the excess heat in the Earth system has been going into the oceans in the past few decades, which is why the rise in land temperatures seems to have slowed down.
But that is no real consolation: it just means that the biggest feedback is also being activated.

Those are the killer feedbacks. Earth has lurched suddenly into a climate 5-6 degrees C higher than now a number of times in the past. The original warming usually came from massive, long-lasting volcanic eruptions that put a large amount of CO2 into the atmosphere -- but in every case it was feedbacks like these that carried the planet up into a temperature regime where there was a massive dieback of animals and plants.

We are the volcanoes now. Our own emissions would take a long time to get us up to really high average temperatures worldwide, but all we have to do is pull the trigger on the feedbacks. The rest is automatic.

Gwynne Dyer is an independent journalist whose articles are published in 45 countries.
2016-40/4016/en_head.json.gz/6538 | The articles below, detailing a search for a Planet X, or the 10th planet in our solar system, are speaking of the same planet Sitchin calls the 12th Planet. In his book, The 12th Planet, Sitchin explains that the ancient Sumerians counted the Sun and the Earth's moon as planets, and thus the Sun, Earth, Moon, Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto added up to 11 planets. Modern astronomy excludes the Sun and the Earth's Moon, counting only 9 planets in our known solar system. Astronomy
Search for the Tenth Planet
Astronomers are readying telescopes to probe the outer reaches of our solar system for an elusive planet much larger than Earth. Its existence would explain a 160-year-old mystery. ... The pull exerted by its gravity would account for a wobble in Uranus' orbit that was first detected in 1821 by a French astronomer, Alexis Bouvard. Beyond Pluto, in the cold, dark regions of space, may lie an undiscovered tenth planet two to five times the size of Earth. Astronomers at the U.S. Naval Observatory (USNO) are using a powerful computer to identify the best target zones, and a telescopic search will follow soon after. ... Van Flandern thinks the tenth planet may have between two and five Earth masses and lie 50 to 100 astronomical units from the Sun. (An astronomical unit is the mean distance between Earth and the Sun.) His team also presumes that, like Pluto's, the plane of the undiscovered body's orbit is tilted with respect to that of most other planets, and that its path around the Sun is highly elliptical.

New York Times
A pair of American spacecraft may help scientists detect what could be a 10th planet or a giant object billions of miles away, the National Aeronautics and Space Administration said Thursday. Scientists at the space agency's Ames Research Center said the two spacecraft, Pioneer 10 and 11, which are already farther into space than any other man-made object, might add to knowledge of a mysterious object believed to be beyond the solar system's outermost known planets. The space agency said that persistent irregularities in the orbits of Uranus and Neptune "suggest some kind of mystery object is really there" with its distance depending on what it is. If the mystery object is a new planet, it may lie five billion miles beyond the outer orbital ring of known planets, the space agency said. If it is a dark star type of object, it may be 50 billion miles beyond the known planets; if it is a black hole, 100 billion miles. A black hole is a hypothetical body in space, believed to be a collapsed star so condensed that neither light nor matter can escape from its gravitational field.

Newsweek
Does the Sun Have a Dark Companion?
When scientists noticed that Uranus wasn't following its predicted orbit, for example, they didn't question their theories. Instead they blamed the anomalies on an as yet unseen planet and, sure enough, Neptune was discovered in 1846. Now astronomers are using the same strategy to explain quirks in the orbits of Uranus and Neptune. According to John Anderson of the Jet Propulsion Laboratory in Pasadena, Calif., this odd behavior suggests that the sun has an unseen companion, a dark star gravitationally bound to it but billions of miles away. ... Other scientists suggest that the most likely cause of the orbital snags is a tenth planet 4 to 7 billion miles beyond Neptune. A companion star would tug at the outer planets, not just Uranus and Neptune, says Thomas Van Flandern of the U.S. Naval Observatory. He admits a tenth planet is possible, but argues that it would have to be so big - at least the size of Uranus - that it should have been discovered by now. To resolve the question, NASA is staying tuned to Pioneer 10 and 11, the planetary probes that are flying through the dim reaches of the solar system on opposite sides of the sun.

Astronomy
Searching for a 10th Planet
The hunt for new worlds hasn't ended. Both Uranus and Neptune follow irregular paths that observers can explain only by assuming the presence of an unknown body whose gravity tugs at the two planets. Astronomers originally thought Pluto might be the body perturbing its neighbors, but the combined mass of Pluto and its moon, Charon, is too small for such a role. ... While astronomers believe that something is out there, they aren't sure what it is. Three possibilities stand out: First, the object could be a planet - but any world large and close enough to affect the orbits of Uranus and Neptune should already have been spotted. Searchers might have missed the planet, though, if it's unusually dark or has an odd orbit. ... NASA has been recording velocities for a year now and will continue for as long as necessary. This past spring, it appeared that budget cuts might force the end of the Pioneer project. The space agency now believes that it will have the money to continue mission operations. Next year, the JPL group will begin analyzing the data. By the time the Pioneer experiment shows results, an Earth-orbiting infrared telescope may have discovered the body. ... Together, IRAS and the Pioneers will allow astronomers to mount a comprehensive search for new solar system members. The two deep space probes should detect bodies near enough to disturb their trajectories and the orbits of Uranus and Neptune. IRAS should detect any large body in or near the solar system. Within the next year or two, astronomers may discover not one new world, but several.
Something out there beyond the farthest reaches of the known solar system seems to be tugging at Uranus and Neptune. Some gravitational force keeps perturbing the two giant planets, causing irregularities in their orbits. The force suggests a presence far away and unseen, a large object that may be the long-sought Planet X. ... The last time a serious search of the skies was made it led to the discovery in 1930 of Pluto, the ninth planet. But the story begins more than a century before that, after the discovery of Uranus in 1781 by the English astronomer and musician William Herschel. Until then, the planetary system seemed to end with Saturn.
As astronomers observed Uranus, noting irregularities in its orbital path, many speculated that they were witnessing the gravitational pull of an unknown planet. So began the first planetary search based on astronomers' predictions, which ended in the 1840s with the discovery of Neptune almost simultaneously by English, French, and German astronomers. But Neptune was not massive enough to account entirely for the orbital behavior of Uranus. Indeed, Neptune itself seemed to be affected by a still more remote planet. In the late 19th century, two American astronomers, William H. Pickering and Percival Lowell, predicted the size and approximate location of the trans-Neptunian body, which Lowell called Planet X. Years later, Pluto was detected by Clyde W. Tombaugh working at Lowell Observatory in Arizona. Several astronomers, however, suspected it might not be the Planet X of prediction. Subsequent observation proved them right. Pluto was too small to change the orbits of Uranus and Neptune; the combined mass of Pluto and its recently discovered satellite, Charon, is only 1/5 that of Earth's moon.
Recent calculations by the United States Naval Observatory have confirmed the orbital perturbation exhibited by Uranus and Neptune, which Dr. Thomas C. Van Flandern, an astronomer at the observatory, says could be explained by "a single undiscovered planet". He and a colleague, Dr. Richard Harrington, calculate that the 10th planet should be two to five times more massive than Earth and have a highly elliptical orbit that takes it some 5 billion miles beyond that of Pluto - hardly next-door but still within the gravitational influence of the Sun. ...
US News World Report
Planet X - Is It Really Out There?
Shrouded from the sun's light, mysteriously tugging at the orbits of Uranus and Neptune, is an unseen force that astronomers suspect may be Planet X - a 10th resident of the Earth's celestial neighborhood. Last year, the infrared astronomical satellite (IRAS), circling in a polar orbit 560 miles from the Earth, detected heat from an object about 50 billion miles away that is now the subject of intense speculation. "All I can say is that we don't know what it is yet," says Gerry Neugebauer, director of the Palomar Observatory for the California Institute of Technology. Scientists are hopeful that the one-way journeys of the Pioneer 10 and 11 space probes may help to locate the nameless body.
2016-40/4016/en_head.json.gz/6591 | Priority areas for conserving the Alboran Sea
Wed, 05 Aug 2009

Among the different marine areas of the Mediterranean Sea, the Alboran Sea represents a unique element given its ecological wealth (a blend of Atlantic and Mediterranean marine biodiversity), its socioeconomic importance, and its geopolitical location between Europe and Africa. The adequate management of this space, and in particular the conservation of certain areas, is essential for the three bordering countries of the Alboran Sea: Algeria, Morocco, and Spain.
Photo: IUCN-Med
The Med-RAS initiative for the Identification of Priority Representative Areas and Species to be conserved in the Mediterranean Sea, developed by IUCN, aims to identify the most important habitats and species so as to adequately protect and manage them. Med-RAS considers the Alboran Sea to be a pilot area for creating a coherent network of marine protected areas (MPAs) in the Mediterranean, based on the identification of its most representative features, like habitats, species, underwater structures, and hydrological phenomena.
On 6 and 7 July 2009, researchers, members of the civil service, and experts from Spain, Morocco and the Mediterranean region took part in a workshop for the identification and planning of a network of marine protected areas in the Alboran Sea. The meeting was held at the Civic Centre in Malaga, Spain, and organised jointly by the IUCN Centre for Mediterranean Cooperation and Malaga County Council.
This workshop was a pilot experiment in the Mediterranean region to begin developing a common methodology for identifying and effectively managing MPAs, as well as to promote the development of a joint programme for adequately managing the region and making concerted decisions.
The specific objectives of the workshop were:
• To identify priority areas for conservation, and define a coherent and representative network of MPAs in the Alboran Sea, based on specific criteria and biological, physical, chemical, and socioeconomic indicators.
• To review existing scientific information in order to establish this network, and evaluate its usefulness for defining ecologically and biologically important areas and species.
• To integrate biological information, interactions, and the impacts of human activities with natural processes for determining sites and conservation objectives.
During the discussions, participants decided to:
• Develop cooperation between research and management organizations from the three countries, and where necessary, with support from external experts.
• Define common criteria for identifying important sites and species, criteria which could be applied to the whole Mediterranean.
• Carry out an analysis of existing information for each important theme, define the limits of the knowledge, and propose main lines of research to fill the gaps.
• On the basis of the above, to propose priority sites for conservation, concerning all important factors (habitats, species, geological structures and hydrodynamic features).
The results of this workshop, the criteria, and the thematic analyses will be discussed during a second workshop planned for October 2009, and should be distributed or presented at international forums, such as the Meeting of the Contracting Parties to the Barcelona Convention, due to take place in Marrakech (Morocco) in November 2009.
For more information, please contact Alain Jeudy de Grissac, or Maria del Mar Otero
Publicación Alborán
The Med-RAS Initiative
Draft report: Conservation and sustainable development of the Alboran Sea: strategic elements for its future management (in preparation)
2016-40/4016/en_head.json.gz/6596 | Air Force consolidates cyber/IT operations to improve performance
By Barry Rosenberg | Jul 25, 2012
Maj. Gen. Craig Olson is the program executive officer for Air Force Business and Enterprise Systems, and also director of the Enterprise Information Systems Directorate at Maxwell Air Force Base, Gunter Annex, Ala. His Air Force experience includes assignments as a weapon systems, electronic warfare and flight test weapon systems officer in a number of Air Force fighters, and he spent a year as vice commander of the Aeronautical Systems Center at Wright-Patterson Air Force Base. He received his second star in June.

The BES portfolio comprises approximately 1,800 military, civil service and contractor support personnel managing 130 IT systems located at five bases throughout the United States, with total funding exceeding $1.3 billion. The organization acquires, operates and sustains operational support IT systems for the Air Force. The Network Centric Solutions-2 (NETCENTS-2) contract vehicle for IT purchasing falls under these organizations.

In mid-August, Olson will leave PEO BES and the EIS organization to head a new Air Force PEO at Hanscom Air Force Base called Command, Control and Communications Information and Networks (C3I&N). It is part of Air Force Materiel Command's restructuring, which included establishment of the Life Cycle Management Center (LCMC) at Wright-Patt. LCMC will consolidate the missions now performed by the Aeronautical Systems Center, the Electronic Systems Center at Hanscom, and the Air Armament Center at Eglin Air Force Base, Fla. In July LCMC stood down the Electronic Systems Center.

Going forward, the C3I&N Directorate will be responsible for integrating cyber capabilities across all Air Force programs, according to a June 18 letter from Air Force Secretary Michael Donley to Sen. John Kerry (D-Mass.). In addition, the new PEO for C3I&N will have a seat at Donley's new CIO Governance Board, which will direct future Air Force cyber and IT investments, according to the letter. Donley also noted that the PEO C3I&N is a two-star position, while the Electronic Systems Center commanding officer has been a three-star billet, so Congress will have the final say on what will fly regarding the organizational structure going forward. The LCMC commander will be a three-star general.

Defense Systems Editor-in-Chief Barry Rosenberg interviewed Olson.

DS: What's at the top of your to-do list?
Olson: As I look ahead to continuously improving our mission, I have three priorities. (The first is) strengthening the partnerships between the application development teams in our PEO, the requirement stakeholders and the infrastructure providers. I am particularly interested in the commoditization of infrastructure, which resides in the C3I&N portfolio I am transitioning to in mid-August. I also want to increase focus on our joint partners in DISA and our sister service IT acquisition organizations. There are many opportunities to increase effectiveness while saving dollars, and this will come from better partnerships across the IT enterprise.
(The second is) continuous learning. We need to incorporate proven commercial best practices while also building a base of government expertise to enable the Air Force to take a stronger lead role in managing the delivery of IT capabilities.
(The third is) the successful transition of the Business Enterprise System portfolio to my successor, Robert Shofner.
DS: What's your view of the cloud, what it can do, and security? What's PEO BES's specific programmatic role in developing military cloud computing? What of your own applications are you going to put on a cloud?
Olson: The cloud will become an ever-increasing part of IT solutions for the Air Force and DOD, but first we have to understand it and develop a CONOPS (Concept of Operations) for its use in a national security setting with an Air Force operational context. The BES Directorate is contributing by actively participating in Air Force-DISA Integrated Process Teams that manage IT infrastructure services within the cloud. Our portfolio also includes a pilot program to deliver airman readiness data via managed services, and I can foresee future requirements for other applications from the portfolio as we mature.
DS: What's the latest on NETCENTS-2?
Olson: I cannot get into specifics because of source selection sensitivities, but here is what I can say at this time. The Network Centric Solutions Program Office made contract award announcements for "Products" in April and more recently, "Application Services Small Business" in June. As is common for contracts of their size, both announcements were protested.
In the case of Products, during debriefings and subsequent protests, it became apparent that offerors may have inconsistently interpreted the requirements of the RFP as they related to completing contractual documentation and – in some aspects – the technical and price requirements. While the Air Force did not respond on the merits to any allegations, nor agree that any allegations had merit, it provided notification to the GAO that corrective action would be taken to ensure fairness to all offerors and to ensure the best products for the Air Force will be obtained at fair and reasonable prices.
In regards to Application Services Small Business, I cannot comment at this time due to litigation sensitivities. The government was notified of three protests just last week and is still evaluating them.
The first NETCENTS-2 contract, Enterprise Integration and Service Management, was awarded in November 2010. There are three more NETCENTS-2 contract sets we expect to award in 2012 and 2013: Application Services Full and Open (March 2013); Network Operations Full and Open (December 2012); and Network Operations Small Business (March 2013). We are also planning for an IT Professional Support set of contracts that is currently on hold.
To mitigate any potential support gap due to protests or other delays, we are taking steps to extend the original NETCENTS both in ceiling and period of performance.
DS: Please bring me up to date on your major acquisitions/priorities, as well as some efficiencies identified (e.g., data center consolidation, reduction in the number of applications, etc.).
Olson: Historically, Air Force efforts to develop and field major combat support information systems have been characterized by significant investments with limited capability being delivered. This realization has brought us to the point of restructuring our enterprise resource planning (ERP) programs to improve effectiveness and efficiency. Previously, three separate program executive officers managed the three Air Force ACAT (Acquisition Category) I ERP programs in the finance, logistics, and personnel domains. These programs included the Defense Enterprise Accounting Management System (DEAMS), Air Force Integrated Personnel and Payment System (AF-IPPS), and the Expeditionary Combat Support System (ECSS). Consolidating the Air Force ERPs into one organization will deliver more positive results, both operationally and fiscally, by allowing the Air Force to take ownership and responsibility for the implementation of these technologies while controlling costs through the use of highly defined work breakdown structures, integrated master schedules and multiple specialized vendors releasing capabilities in agile increments within a government-led ERP implementation methodology. In this new model, software development and infrastructure become commoditized in nature. The individual programs no longer provide their own technical platforms, but rather technical infrastructure is delivered as a standardized, cost-effective service from DISA. Towards that end, we are supporting Air Force Space Command and PEO C3I&N in the migration of Air Force IT capability to a DISA-provided infrastructure. Additionally, commonality of tools, software baselines, acquisition strategies, and processes are being established across the BES Directorate to afford additional efficiencies by reducing the number of applications--a major premise of ERP development. However, this new way of doing business is not without risks. It requires breaking antiquated models and streamlining bureaucracies. Air Force leadership has committed to this new approach by rapidly establishing cross-functional governance boards characterized by the commitment of the members to remove barriers and establish a path for success. DS: Please provide some specifics regarding the delivery of ERP capabilities.
Olson: As far as specific capability delivery, in the financial domain, DEAMS Release 1 supports the processing of Air Mobility Command (AMC) general funds, and Release 2 supports AMC bases using Transportation Working Capital Funds. The Program Management Office is in the final stages of coordinating the Requirements and Design Services request for proposal, and expects OSD to authorize release of the RFP in mid to late July 2012. This contract will support design activities for Releases 3-6, thus laying the foundation for the remainder of Increment 1. In the logistics domain, the ECSS program is undergoing a restructure and awaiting OSD approval of a multi-contract/multi-release acquisition strategy (similar to DEAMS). Approval is anticipated in August 2012. Finally, in the personnel domain, OSD is currently reviewing the AF-IPPS acquisition strategy. Once OSD completes their review, the final RFP is expected to be released by the end of this fiscal year. The initial contract will include blueprinting/conference room piloting for all releases as well as a priced option for the development, testing, fielding, training and support of the military leave processing capability across all components. Two 12-month options for initial support of the leave capability will also be included in the initial contract.
I think it is fair to say our team has a demanding agenda, but the opportunities for more effective and efficient delivery of business capabilities are greater than ever.
DS: The ERP programs command a lot of the attention, so is there a lesser-known program you’d like to highlight?

Olson: One that stands out is the Deliberate & Crisis Action Planning & Execution System (DCAPES), one of our ACAT III programs, which provides a tool for the Air Force to plan and execute major combat operations and military operations other than war. From my perspective, our biggest challenge has been managing the critical dependencies between DCAPES and its two primary strategic partners, the Global Command and Control System - Air Force (GCCS-AF) and the Joint Operations Planning and Execution System (JOPES). To more effectively anticipate and resolve potential show-stopping issues, we have joined forces and created a senior leadership forum that meets on a regular basis to foster greater unity of effort and ensure our mutual success. One of the issues we've historically struggled with is making quick updates to DCAPES in order to remain interoperable when JOPES changes its software. We are now making excellent progress in our effort to loosely couple DCAPES from JOPES. This loose coupling effort will for the first time align authoritative data exposure with current DOD net-centric policy. In addition, DCAPES is developing a strategic plan to deploy significant new warfighter capability by means of an evolutionary acquisition approach based on state-of-the-art agile software development methodologies.
DS: I understand that Air Force promotion board members started using an electronic record scoring system for the first time. How will that work, and how will it streamline the promotion process?
Olson: With the Electronic Board Operations Support System (eBOSS), promotion board members use 135 human system interfaces with ergonomic touch-screen workstations to view and score digital personnel records. Multiple board members will be able to access the same records at the same time to make the board process more efficient. eBOSS is projected to reduce a typical three-week board by two to three days. In addition, the Air Force Personnel Center (AFPC) estimates eBOSS will eliminate more than 235,000 duplicate paper records across the total force, saving the Air Force approximately $2.8 million in annual records maintenance costs once it’s fully implemented in the active duty and reserve components. Another efficiency AFPC has noted is the preparation time to accomplish pre-board activities. Previously, pre-board activities required 30 full-time personnel during a 140-day period. Today, this same activity is now accomplished with only 2 people in 6.5 hours. Finally, promotion boards will now feature workstations with high-definition, touch-screen monitors with a user interface similar to a smart phone or tablet computer. eBOSS will enable board members to navigate, view and score records with a touch of their fingertips to the screen when accessing board-related documents.

DS: You mentioned one of your ACAT III programs earlier. Another is the Service Oriented Architecture (SOA) ACAT III program. What do you hope to accomplish with that?

Olson: The SOA approach allows a user, particularly in a web-based environment, to more easily react to the inevitable changes required to support a business practice such as human resources (HR). The SOA strategy has been used successfully in the commercial sector, and we are convinced SOA can also improve HR IT services within the Air Force. For example, the data within point-to-point interfaces can become easily outdated, particularly if you have many interfaces to manage. Given the complexity of some of our IT systems and the sheer number of data points we manage today, the currency of this data can become suspect in a hurry. SOA can help this situation by treating these interfaces as a single service, standardizing and updating this service regularly, and by then letting everyone know that this service is the single authoritative data source.
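As a rough illustration of the pattern Olson describes, in which many consumers query one authoritative service instead of keeping their own point-to-point copies of the data, here is a minimal, hypothetical Python sketch. The class and field names are invented for illustration only and do not represent DCAPES, JOPES, eBOSS, or any actual Air Force system.

```python
# Illustrative sketch of the "single authoritative data source" idea behind SOA.
# All names and fields are hypothetical; this is not real Air Force software.
from dataclasses import dataclass
from datetime import date

@dataclass
class PersonnelRecord:                 # hypothetical shared data contract
    member_id: str
    grade: str
    last_updated: date

class AuthoritativePersonnelService:
    """One service that every consumer queries, instead of holding stale copies."""
    def __init__(self):
        self._records = {}             # stands in for the real system of record

    def publish(self, record: PersonnelRecord) -> None:
        """The owning system pushes updates here once, not to N point-to-point feeds."""
        self._records[record.member_id] = record

    def get(self, member_id: str) -> PersonnelRecord:
        """Every consumer sees the same, current answer."""
        return self._records[member_id]

service = AuthoritativePersonnelService()
service.publish(PersonnelRecord("A123", "E-5", date.today()))
print(service.get("A123").grade)       # consumers query the service on demand
```

The point of the sketch is the design choice rather than the code itself: when an interface is exposed as one standardized service, updating it in a single place keeps every downstream system current, which is the loose-coupling benefit Olson attributes to the DCAPES effort.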
15 Aug 2013: Interview
Scientists and Aid Experts Plan for a Warmer Future
Climate scientists and humanitarian relief workers need to collaborate far more closely to prepare for a future of increased extreme weather events. In an interview with Yale Environment 360, Harvard University public health expert Jennifer Leaning analyzes the results of a meeting between these two very different factions.

By Diane Toomey

Harvard University recently sponsored a conference that brought together two groups — climate scientists and humanitarian relief workers — that will undoubtedly be collaborating more closely in the future as natural disasters intensify in a warming world. The woman who was instrumental in opening a dialogue between these two factions was Jennifer Leaning, the director of the FXB Center for Health and Human Rights at the Harvard School of Public Health and associate professor at Harvard Medical School.
A specialist in disaster response and forced migration, Leaning hoped the summit would shed light on the timing and location of the droughts, floods, and other extreme weather events that are likely to worsen as the climate changes. But as Leaning explained in an interview with Yale Environment 360 contributor Diane Toomey, the specific predictions that disaster experts sought were not possible. "The humanitarians found that the questions they were asking were not the ones that the climate scientists were prepared to answer," said Leaning.
Still, the May conference underscored that the coming decades will bring unprecedented challenges for the disaster response community, which must “ramp up our skill set and our collaborations,” said Leaning. “The most sobering aspect of this climate conference,” she added, “was that this problem has got to be met with a societal-wide response. It won’t just be what Doctors Without Borders or the International Rescue Committee can cope with.”
Yale Environment 360: What was the genesis of the conference and how unusual is it for these two communities to speak to each other?
Jennifer Leaning: The genesis was in conversations that the current director of Harvard Humanitarian Initiative and I had with the director of the Harvard Center for the Environment. The three of us were initially talking about a study on how populations adapted to climate change, with migration being an obvious one. We decided that we would first try to get together representatives from the humanitarian community and see if we could discern the trends in climate change that would have relevance to humanitarian response. We were not at all sure how much overlap there would be [with climate scientists] in terms of thinking, discourse, common understanding. And we found it was actually sometimes somewhat difficult to bring the two sides together into a fruitful conversation. But at the end of a few days it was really very successful.
e360: What were some of the obstacles?
Leaning: The humanitarian community needs to know certain kinds of qualitative assessments with a real amount of scientific certainty so that we can plan for areas that will be more likely to be hit by drought, areas that are going to have serious sea level rise, and areas that are going to be most at risk for very heavy storms. And the climate scientists, based on their deep knowledge of the uncertainties, were somewhat reluctant to get into the qualitative assessments that would be useful for policy. The humanitarians found that the questions they were asking were not the ones that the climate scientists were prepared to answer. What the humanitarian community was hoping to get, probably naïvely, were better estimates that suggest, for example: “This region of the Mekong Delta is going to become untenable because of increased flooding and sea level rise.” We couldn’t get that level of certainty.
e360: That must have been a bit disheartening. And also there must have been a realization that it’s going to be very difficult to fundraise around this issue if it is so fraught with uncertainty.
Leaning: Yes, it was frustrating. On the other hand, I think the humanitarians learned a great deal about climate change. Some climate scientists felt concerned that they could not give the kind of answers they knew were important to "real world" human beings about security, drought, forced migration, and coastal threat. Their time scale was very different for two reasons. One is they needed more science, better modeling, more information to be able to deliver the kind of certainties we needed. And the timescale on which these changes take place is much greater than the planning horizon of the humanitarians, who are looking at this year, next year, two or three years out.
e360: At the conference Daniel Schrag, the director of the Harvard Center for the Environment, told the disaster relief community, “We’re going to rely on you to deal with the mess that’s coming.” Did a collective chill go down the spines of the audience when they heard that?
Leaning: I must say it was very sobering. The climate scientists had been trying to draw attention to this at international and national governance levels for a considerable amount of time. I think they were glad to have some serious academic and humanitarian leaders in the audience. And there were some good international officials, but primarily from big UN-based agencies, like the World Food Program, in the audience. There was a silent recognition that the absent participants in this discussion are the national governments that are going to have to develop and impose energy policies that will help [mitigate] this problem.
e360: Has there been a sea change in the past few years in the way major relief organizations are figuring climate change disasters into their planning?
Leaning: There has been a sea change, I would say, in the concern and in the attention humanitarian organizations and major international organizations are giving to climate change. And that is because climate change is going to be reflected in the increase in forced migration of people who have to leave an area that is afflicted by drought or flood because they can no longer earn a livelihood in those areas. This issue of forced migration is a major concern of humanitarians. The coastal areas around the world are among the most heavily populated. And those areas that are now affected intermittently by drought are highly likely to be increasingly affected by drought, food insecurity, and inability to maintain agricultural livelihoods.
But we couldn’t pinpoint a threshold, a time when certain people will have to move or find other drastic options. So while this conference substantiated that our concern is very well grounded in scientific reality, it failed to support the humanitarian community’s need for more precise numbers, locations, and time horizons.
e360: So where does that leave the humanitarian community? You have said that disaster relief organizations now need to turn their attention to preparedness, to risk reduction. So at this point what does that look like on the ground?
Leaning: I think the answer to that is unfolding. In the longer run, there is going to be recognition that the most serious places to look at now are urban areas because forced migration from terrain that becomes increasingly untenable is inevitably going to lead people to cities.
There has already been a marked increase on the part of the humanitarian community in interest in urban disasters and urban disasters augmented by a potentially great influx of people over the next 20 to 30 years, which is already starting. So it becomes in many ways an issue of the humanitarian responders having to work much more closely with the development community. How do you make cities viable? How do you improve the water, sanitation, and transport infrastructures? How do you manage the clash of ethnicities as different and diverse people move into these cities? How do you make these cities economic engines that will not just support the current overload of people but the anticipated ones?
e360: It does sound like there needs to be a culture shift within the relief community, which by definition is a reactive one.
Leaning: I think it’s already occurring at the leadership levels of the major humanitarian organizations and international agencies. There is the recognition that the old model of able people managing a refugee camp or helping people who are in flight get to safety and then hunkering down with them and providing shelter, water, sanitation, food, and medical relief… these strategies now have to be embedded as one wing of the humanitarian community. The other wing, which I think is beginning to develop, is how do we help people stay on the land? Where are people going to find it impossible to stay on the land? What are the options for their movement into more viable parts of the world?
This is likely to be part of the planning horizon of humanitarian development communities, noting that some of these areas will probably be able to sustain the populations they have for the next five to 10 years but after that they may not be. And this requires modeling, interaction with local governments and national governments, and increasing dialogue with climate scientists. So at the leadership level of the humanitarian community there is a marked sense that we have to ramp up our skill set and our collaborations with other major entities in world development and planning. We’ve got to be working with water scientists and economists and development experts. And the entire response to these population-based threats and crises cannot be a traditional emergency enterprise that the humanitarian community has [conducted] for the last 40 to 50 years.
e360: Building resilience seems to be a key concept as the humanitarian community thinks about responding to climate change disasters. Define resilience in this context.
Leaning: Resilience means the ability to survive a shock and come back with a capacity to plan, rethink, and build back in an adaptive manner.
e360: It sounds like a very tall order at this point.
Leaning: Well, it is a tall order. Let me pick up two other very interesting ideas that came up in this conference. One is that sometimes we have to look at people who will not move, and may not need to move, if they change their livelihoods, change their skill sets. There are many ways to strengthen and make more resilient populations who, right now, are deeply homogeneous in their skill sets and in their relationship to the land or the water. So those fishermen who know nothing else, what are the options to allow them to stay near the water or above the water and not have to move? So that's why we need to be working with the development communities to see how people can be helped to adapt and become resilient so they can actually stay home if they want to.
And we need to look at communities that are healthy and thriving, say in urban areas or places that are not going to be so affected by climate change, but who may not be very open to having new people come in. How might we, in a careful and thoughtful mode, have the people who want to leave the high-threat areas become prepared to leave, and then leave on their own? So it’s more of a trickle of people moving rather than a sudden onslaught of 200,000 people outside the city gates.
e360: There was a survey done last year of major relief organizations in which more than half of the agencies said that focusing on disaster risk reduction is a good idea. But only a very small percentage of aid organization spending right now is going toward that. Is it just a matter of funds stretched too thinly? Or is it a matter of convincing the powers that be that disaster risk reduction is the way to go at this point?
Leaning: I would begin by saying the humanitarian community is seriously underfunded and understaffed to deal with the traditional crises that we have been involved in. Consequently, for the humanitarian communities to be able to say, “Yes, we are seriously working on disaster risk reduction,” is to ask them to divert funds from what their classic role is, and they can’t.
We have to look at mitigating strategies so that, as these disasters — whether they're climate related or not — begin to increase and affect a larger number of people, we can figure out how these populations can be economically supported. The humanitarian community needs to look at property damage, who has insurance, and where is the money going to come from to help people rebuild and recover their livelihoods. The economics of coping with these populations can't be based on pleas and UN funding from the international community and from ordinary people who are charitable on an ad hoc basis. There needs to be some recognition that a massive disaster fund and insurance fund system should be considered at the regional level or at the international level so that it is not based on appeals every time there's a major disaster.
e360: When you consider population growth, when you consider failed states, when you consider urbanization, and you lay on top of that the effects of climate change, it does sound like we are headed into a time of perhaps unprecedented levels of disaster.
Leaning: I’d certainly say unprecedented levels of turbulence, both in terms of weather and in terms of population distress. The implications go well beyond the humanitarian community, although you’ll see the humanitarian community there, struggling whenever these events hit ground. The most sobering aspect of this climate conference was that this problem has got to be met with a societal-wide response. It won’t just be what Doctors Without Borders or the International Rescue Committee can cope with.
A decade ago the International Federation of Red Cross and Red Crescent Societies collaborated with the Netherlands Red Cross to establish the Red Cross/Red Crescent Climate Centre
www.climatecentre.org/
They are true pioneers in this field and have accomplished a lot. I didn't see any participants from the Climate Centre on the speakers' list for this conference....don't know if they participated. But it seems odd they weren't called upon to present insights from a decade of experience as practitioners.
Posted by Catherine McMullen on 15 Aug 2013
I don't think this analysis can be done under strictly business-as-usual standards and principles. I think you have to anticipate that our extremely complex, growth dependent and energy intensive economic system may collapse before the worst of the climate change damage occurs. While of course mitigating further damage, this would obviously impact humanitarian aid groups and the ability of non-impacted countries to handle refugees.
Posted by John Dyer on 16 Aug 2013
Diane Toomey, who conducted this interview for Yale Environment 360, is an award-winning public radio journalist who has worked at Marketplace, the World Vision Report and Living on Earth, where she was the science editor. She also has reported on science, medicine and the environment for WUNC, the public radio station in Chapel Hill, N.C.
2016-40/4016/en_head.json.gz/6629 | For other uses, see Navigator (disambiguation).
A navigator is the person on board a ship or aircraft responsible for its navigation.[1] The navigator's primary responsibility is to be aware of ship or aircraft position at all times. Responsibilities include planning the journey, advising the ship's captain or aircraft commander of estimated timing to destinations while en route, and ensuring hazards are avoided. The navigator is in charge of maintaining the aircraft or ship's nautical charts, nautical publications, and navigational equipment, and generally has responsibility for meteorological equipment and communications.
With the advent of GPS, the effort required to accurately determine one's position has decreased by orders of magnitude, and the field has undergone a revolutionary transition since the 1990s, with traditional navigation techniques now used far less frequently.
1 In naval occupations
2 In aviation
3 Nautical charts
4 Nautical publications
5 Mission and passage planning
6 Navigational equipment
In naval occupations
Shipborne navigators in the U.S. Navy are normally surface warfare officer qualified with the exception of naval aviators and naval flight officers assigned to ship's navigator billets aboard aircraft carriers and large deck amphibious assault ships and who have been qualified at a level equal to surface warfare officers. U.S. Coast Guard officers that are shipboard navigators are normally cutter qualified at a level analogous to the USN officers previously mentioned. Quartermasters are the navigator's enlisted assistants and perform most of the technical navigation duties.
Aboard ships in the Merchant Marine and Merchant Navy, the second mate is generally the (senior) navigator.
In aviation
Further information: Air navigation
Navigators are sometimes also called 'air navigators' or 'flight navigators'. In civil aviation this was a position on older aircraft, typically between the late-1910s and the 1970s, where separate crew members (sometimes two navigation crew members) were often responsible for an aircraft's flight navigation, including its dead reckoning and celestial navigation, especially when flown over oceans or other large featureless areas where radio navigation aids were not originally available. As sophisticated electronic air navigation aids and universal space-based GPS navigation systems came online, the dedicated Navigator's position was discontinued and its function was assumed by dual-licensed Pilot-Navigators, and still later by the aircraft's primary pilots (Captain and FO), resulting in a continued downsizing in the number of aircrew positions on commercial flights. Modern electronic navigation systems made the civil aviation navigators redundant by the early 1980s.[1]
In military aviation, navigators are still actively trained and licensed in some present day air forces, as electronic navigation aids cannot be assumed to be operational during wartime. In the world's air forces, modern navigators are frequently tasked with weapon systems employment and co-pilot type duties depending on the type, model and series of aircraft. In the U.S. Air Force, the aeronautical rating of navigator has been augmented by addition of the combat systems officer, while in the U.S. Navy and U.S. Marine Corps, those officers formerly called navigators, tactical systems operators, or naval aviation observers have been known as naval flight officers since the mid-1960s. USAF navigators/combat systems officers and USN/USMC naval flight officers must be basic mission qualified in their aircraft, or fly with an instructor navigator or instructor NFO to provide the necessary training for their duties.
Nautical charts
For more details on this topic, see Nautical charts.
A 1976 United States NOAA chart of part of Puerto Rico
A naval ship's navigator is responsible for buying and maintaining its nautical charts. A nautical chart, or simply "chart", is a graphic representation of a maritime or flight region and adjacent coastal regions. Depending on the scale of the chart, it may show depths of water and heights of land, natural features of the seabed, details of the coastline, navigational hazards, locations of natural and man-made aids to navigation, information on tides and currents, local details of the Earth's magnetic field, restricted flying areas, and man-made structures such as harbors, buildings and bridges. Nautical charts are essential tools for marine navigation; many countries require vessels, especially commercial ships, to carry them. Nautical charting may take the form of charts printed on paper or computerised electronic navigational charts.
The nature of a waterway depicted by a chart changes regularly, and a mariner navigating on an old or uncorrected chart is courting disaster. Every producer of navigational charts also provides a system to inform mariners and aviators of changes that affect the chart. In the United States, chart corrections and notifications of new editions are provided by various governmental agencies by way of Notices to Airmen (NOTAMs), Notice to Mariners, Local Notice to Mariners, Summary of Corrections, and Broadcast Notice to Mariners. Radio broadcasts give advance notice of urgent corrections.
A convenient way to keep track of corrections is with a "chart and publication correction record card" system. Using this system, the navigator does not immediately update every chart in the portfolio when a new Notice to Mariners arrives, instead creating a card for every chart and noting the correction on this card. When the time comes to use the chart, he pulls the chart and the chart's card, and makes the indicated corrections on the chart. This system ensures that every chart is properly corrected prior to use. British merchant vessels receive weekly Notices to Mariners issued by the Admiralty. When corrections are received, all charts are corrected in the ship's folio and recorded in NP133A (Admiralty Chart Correction Log and Folio Index). This system ensures that all charts are corrected and up to date. In a deep sea vessel with a folio of over three thousand charts this can be a laborious and time-consuming task for the navigator.
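To make the record-card bookkeeping concrete, the following is a small, hypothetical Python sketch of how such a correction log might be kept in software; it illustrates the workflow described above rather than any system actually used aboard ship, and the chart and notice identifiers are invented.

```python
# Minimal sketch of a "chart correction record card" log: corrections are noted
# per chart as notices arrive, and applied only when the chart is pulled for use.
from collections import defaultdict

class CorrectionLog:
    def __init__(self):
        # chart number -> list of correction entries noted from Notices to Mariners
        self._cards = defaultdict(list)

    def note_correction(self, chart_no: str, notice_id: str, summary: str) -> None:
        """Record a notice on the chart's card without touching the chart yet."""
        self._cards[chart_no].append(
            {"notice": notice_id, "summary": summary, "applied": False})

    def pending(self, chart_no: str):
        """Corrections still to be drawn on the chart before it is used."""
        return [c for c in self._cards[chart_no] if not c["applied"]]

    def mark_applied(self, chart_no: str) -> None:
        for c in self._cards[chart_no]:
            c["applied"] = True

log = CorrectionLog()
log.note_correction("12354", "NM 34/16", "Buoy 'A' repositioned; amend charted position")
print(log.pending("12354"))   # review the card, correct the chart, then:
log.mark_applied("12354")
```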
Various and diverse methods exist for the correction of electronic navigational charts.
Nautical publications
For more details on this topic, see Nautical publications.
This page from a Sailing Directions assists the navigator by providing pictures and descriptions of a harbor approach.
The term nautical publications is used in maritime circles to describe a set of publications, generally published by national governments, for use in safe navigation of ships, boats, and similar vessels.
The nature of waterways described by any given nautical publication changes regularly, and a mariner navigating by use of an old or uncorrected publication is courting disaster. Every producer of nautical publications also provides a system to inform mariners of changes that affect the chart. In the United States, corrections and notifications of new editions are provided by various governmental agencies by way of Notice to Mariners, Local Notice to Mariners, Summary of Corrections, and Broadcast Notice to Mariners. Radio broadcasts give advance notice of urgent corrections. For ensuring that all publications are fully up-to-date, similar methods are employed as for nautical charts. Various and diverse methods exist for the correction of electronic nautical publications.
Mission and passage planning
For more details on this topic, see passage planning.
The navigator focuses on creating the ship's passage plans (or "mission plans" for USAF purposes). A mission or passage plan can be summarized as a comprehensive, step by step description of how the voyage is to proceed from berth to berth, including unberthing, departure, the en route portion of a voyage, approach, and mooring/arrival at the destination.
Before each voyage begins, the navigator should develop a detailed mental model of how the entire voyage will proceed. In the aviation community, this is known as "chair flying." This mental model includes charting courses, and forecasting weather, tides, and currents. It includes updating and checking aeronautical charts, nautical publications, which could include Sailing Directions and Coast Pilots, and projecting the various future events including landfalls, narrow passages, and course changes that will transpire during the voyage. This mental model becomes the standard by which he will measure progress toward the goal of a safe and efficient voyage, and it is manifested in a written passage plan.
When working in a team environment, the passage/mission plan should be communicated to the navigation team in a pre-voyage conference (USAF term is "mission briefing") in order to ensure that all members of the team share the same mental model of the entire trip.
Passage planning procedures are specified in International Maritime Organization Resolutions, in the laws of IMO signatory countries (for example, Title 33 of the U.S. Code of Federal Regulations), and a number of professional books and USN/USAF publications. There are some fifty elements of a comprehensive passage plan depending on the size and type of vessel, each applicable according to the individual situation.
Modern navigators often enter passage plans on electronic systems.
A good passage plan will include a track line laid out upon the largest-scale charts available which cover the vessel's track. The navigator will draw and redraw the track line until it is safe, efficient, and in line with all applicable laws and regulations. When the track is finished, it is becoming common practice to also enter it into electronic navigation tools such as an Electronic Chart Display and Information System, a chartplotter, or a GPS unit.
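As an illustration of the kind of calculation behind a planned track line, the sketch below computes great-circle leg distances between waypoints using the standard haversine formula. The waypoints and values are hypothetical, and actual passage planning, as described above, also weighs charted hazards, regulations, weather and traffic, which no distance formula captures.

```python
# Hedged sketch: leg distances along a planned track line via the haversine formula.
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two positions (degrees)."""
    r_nm = 3440.065                      # mean Earth radius expressed in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

track = [(40.70, -74.01), (40.46, -73.83), (40.10, -73.20)]   # hypothetical waypoints
legs = [haversine_nm(*a, *b) for a, b in zip(track, track[1:])]
print([round(d, 1) for d in legs], "nm per leg; total", round(sum(legs), 1), "nm")
```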
Once the voyage has begun the progress of the vessel along its planned route must be monitored. This requires that the ship's position be determined, using standard methods including dead reckoning, radar fixing, celestial navigation, pilotage, and electronic navigation, to include usage of GPS and navigation computer equipment.
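Of the position-finding methods just mentioned, dead reckoning is the simplest to show numerically: a known position is advanced by course and speed over the elapsed time. The sketch below is a rough flat-earth approximation with hypothetical values, suitable only as an illustration; in practice the resulting DR position is always checked against fixes from radar, celestial observation or GPS.

```python
# Rough dead-reckoning sketch: advance a known position by course, speed and time.
import math

def dead_reckon(lat, lon, course_deg, speed_kn, hours):
    """Advance (lat, lon) in degrees by speed (knots) on a true course for `hours`."""
    distance_nm = speed_kn * hours
    d_lat = distance_nm * math.cos(math.radians(course_deg)) / 60.0
    # 1 minute of latitude is 1 nm; east-west distance shrinks with cos(latitude)
    mid_lat = math.radians(lat + d_lat / 2)
    d_lon = distance_nm * math.sin(math.radians(course_deg)) / (60.0 * math.cos(mid_lat))
    return lat + d_lat, lon + d_lon

# Hypothetical run: 2.5 hours at 12 knots on a true course of 090 degrees
print(dead_reckon(40.70, -74.01, course_deg=90.0, speed_kn=12.0, hours=2.5))
```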
Computer software now supports many of these tasks: passage planning software, tide and tidal current predictors, celestial navigation calculators, consumables estimators for fuel, oil, water, and stores, and other useful applications.
Navigational equipment
The navigator is responsible for the maintenance of the ship's navigational equipment. U.S. Air Force navigators are responsible for troubleshooting problems of the navigation equipment while airborne, but the ground Maintenance personnel are ultimately responsible for repair and upkeep of that aircraft's navigation system.
See also
Aircrew (Flight crew)
Officer of the deck
Nautical chart
Nautical publications
Passage planning
United States Merchant Marine
Second mate
Ship transport
References
1. Grierson, Mike. "Aviation History — Demise of the Flight Navigator," FrancoFlyers.org website, October 14, 2008. Retrieved August 31, 2014.
IUCN Tiger Specialist Peter Jackson Earns His Stripes
Wed, 15 Jul 2009 IUCN - The World Conservation Union, Gland, Switzerland (07.06.05) - Researchers plan to name the recently identified Malayan tiger, a new subspecies, Panthera tigris jacksoni to honour the career of tiger conservationist Peter Jackson, former Chair and still active member of the Species Survival Commission’s Cat Specialist Group.
As with many threatened species, tigers have been divided into subspecies – natural geographically separate populations – for conservation as well as recognition purposes. The discovery of this new subspecies therefore has major conservation implications and is highlighted in the most recent edition of CAT NEWS, the SSC Cat Specialist Group newsletter.
Tigers historically inhabited much of Asia and may have numbered as many as 100,000 animals as recently as a century ago. Unfortunately, they have declined dramatically since then. Today’s remaining population is estimated to be only 5,000 - 7,000 individuals and three of the traditional eight subspecies, the Javan, Bali and Caspian tigers, became extinct in the mid to late 20th century.
This collapse prompted Peter’s interest in tigers as far back as the 1950s, when as Reuters foreign correspondent in India, he became fascinated by India’s wonderful wildlife. However, it was the IUCN General Assembly in Delhi in 1969, when the crash in India’s tiger population was a dominant issue, that marked a watershed. It was the start of his close involvement in tiger conservation which would lead to his establishment as one of the key players in this field.
Working for WWF International in the 1970s, Peter was closely involved in Operation Tiger to fund conservation action and he worked in tandem with the Indian Government’s programme to establish a network of tiger reserves.
Malayan tiger (P. t. jacksoni) - photo by UF-Malaysia Tiger Project

Later on, he worked independently to promote tiger conservation and in 1983 was nominated Chair of the SSC Cat Specialist Group, a post he held for 17 years. In his own words, this became his “life’s mission”, and naming this newly identified subspecies after him is a fitting tribute to Peter’s drive and commitment to tiger, and indeed all wild cat, conservation.
The correct identification of a subspecies is critically important for conservation purposes. This discovery, at the end of last year, is the culmination of a 20-year long study to characterize living tiger populations using a molecular genetic approach and has been published in the online journal Public Library of Science PLoS-Biology.
The results of the study showed that there was strong genetic evidence for four of the five remaining subspecies (Amur, Indochinese, Sumatran and Bengal) whilst the sampling of the South China tiger was very sparse and requires further sampling.
Unexpectedly, the Indochinese tiger showed a distinct separation into two distinct groups as different as the other subspecies are from each other. One group is confined to the Malayan peninsula and the other across the rest of the traditional Indochinese tiger range (see distribution map).
The newly identified subspecies has the common name Malayan tiger, to emphasize its geographical range, and the scientific name Panthera tigris jacksoni, in honour of Peter Jackson’s career and dedication to tiger conservation.
This discovery will have important consequences for tiger conservation and management. Specifically, it would suggest that the new species be recognized and managed as a high priority in Malaysia.
CAT NEWS article (356KB) // PLoS report // Distribution map (281KB)
Andrew McMullin, IUCN Species Programme Communications Officer
Tel: +41 (0)22 999 0153; Email: [email protected]
IUCN/SSC Cat Specialist Group
Quantitative and Computational Toxicology Research Group
Raymond SH Yang is offering a “Physiologically-Based Pharmacokinetic Modeling Workshop for Beginners” at Colorado State University, Fort Collins, Colorado, 04-08 August 2014. See this announcement for more information.
Brad Reisfeld received a Regional Travel Grant from the Fulbright Foundation. With this funding, Dr. Reisfeld will visit the University of New South Wales in Sydney, Australia to explore opportunities for research collaborations and institutional partnerships.
Brad Reisfeld was awarded a US Scholars Grant from the Fulbright Foundation. From June 2013 to May 2014, he will be a Visiting Professor at the Center of Excellence for Environmental Health & Toxicology and the Faculty of Pharmaceutical Sciences at Naresuan University in Phitsanulok, Thailand. During this period, he will be developing and delivering a course, ‘Introduction to Mathematical and Computational Modeling’. He will also be conducting research focused on the development and use of biologically-based models to systematically investigate the disposition of chemicals associated with the cause and treatment of diseases important to the people of Thailand and the surrounding region.
Brad Reisfeld and Arthur Mayeno edited a two-volume book as part of the Humana Press “Methods in Molecular Biology” series entitled “Computational Toxicology”. It is now available from Springer and major bookstores, including amazon.com.
Todd Zurlinden, a graduate student in Brad Reisfeld‘s group, was recently awarded a Science to Achieve Results (STAR) Graduate Student Fellowship from the Environmental Protection Agency. This prestigious award will allow Mr. Zurlinden to pursue research with Dr. Reisfeld in the area of “Environmental Public Health Indicators for Organophosphorus Insecticide Exposure”.
Who We Are
The Quantitative and Computation Toxicology (QCT) Research Group at Colorado State University consists of a multidisciplinary team of scientists with an interest in developing and applying rigorous mathematics, computer-based tools, and targeted experimentation to study the effects of toxicants, drugs, and other foreign chemicals on the body.
Members of the group represent a wide variety of disciplines, including toxicology, pathology, biochemistry, chemistry, physics, veterinary medicine, chemical and biological engineering, and biomedical engineering.
Our current members reside at a number of institutions: Colorado State University (Fort Collins, Colorado), The Hamner Institutes for Health Sciences (Research Triangle Park, North Carolina), and the U.S. Environmental Protection Agency (Research Triangle Park, North Carolina).
Owing to their combination of quantitative skills and biomedical research experience, many of our graduates are highly sought after by academia, industry, and government.
What We Do
There are a number of areas in toxicology of current interest to the group, including studying the toxicities of volatile organic solvents, examining chemically-induced carcinogenesis, developing methodologies for risk assessment, creating techniques for predictive xenobiotic metabolomics, investigating neuro-developmental toxicology and endocrine disruptors, and utilizing state-of-the-art methods to understand toxicological interactions of chemical mixtures.
Research Areas
In general, the focus of research within the group is on developing computational methods and related experimental techniques to study the effects of toxicants, drugs, and other foreign chemicals on the body. Several of the active research areas are listed below.
Xenobiotic Metabolomics
Computational and experimental methods for predicting the metabolite inventory and pathways resulting from xenobiotic exposure.
Carcinogenesis
Research utilizing in vivo, in vitro, and in silico techniques to better understand the complex phenomena involved in chemically-induced carcinogenesis.
Risk Assessment
Research involving a combination of computational approaches and targeted experimentation to understand and predict the risk associated with chemicals and chemical mixtures.
Chemical Mixtures
Research into the multifaceted toxicological interactions that result from exposure to chemical mixtures.
Computational Tools and Modeling Methodologies
Development and validation of computational tools and approaches in computational toxicology.
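As a generic illustration of the kind of quantitative modeling these research areas involve, and not one of the group's published models, the sketch below integrates a simple one-compartment pharmacokinetic model with first-order absorption and elimination; all parameter values are hypothetical.

```python
# Generic one-compartment pharmacokinetic model: illustrative only, with
# hypothetical dose, rate constants and volume of distribution.
import numpy as np
from scipy.integrate import odeint

def one_compartment(y, t, ka, ke):
    """y[0]: amount at absorption site; y[1]: amount in the central compartment."""
    gut, central = y
    dgut = -ka * gut                      # first-order absorption out of the gut
    dcentral = ka * gut - ke * central    # absorption in, first-order elimination out
    return [dgut, dcentral]

dose_mg = 100.0          # hypothetical oral dose
ka, ke = 1.0, 0.2        # hypothetical absorption/elimination rate constants (1/h)
volume_l = 42.0          # hypothetical volume of distribution (L)

t = np.linspace(0, 24, 241)                        # simulate 24 hours
amounts = odeint(one_compartment, [dose_mg, 0.0], t, args=(ka, ke))
concentration = amounts[:, 1] / volume_l           # mg/L in the central compartment

print(f"Peak concentration ~{concentration.max():.2f} mg/L "
      f"at t = {t[concentration.argmax()]:.1f} h")
```

Physiologically based models of the sort mentioned in the news items above extend this idea to many tissue compartments linked by blood flows, but the underlying mass-balance arithmetic is the same.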
If you are interested in joining or collaborating with our group,
please contact Brad Reisfeld.
MEMO/11/623
Brussels, 21 September 2011

Research & Innovation: Commission calls for partnerships – frequently asked questions

An invitation to public and private actors to join forces at European level to apply research and innovation solutions to major challenges facing society has been issued by the European Commission in a Communication published on 21 September (see IP/11/1059). This MEMO gives some additional information on the role of partnering in research and innovation.

What is partnering?

Partnering brings together the public sector at European, national and regional levels in public-public partnerships ("P2Ps") as well as the public and private sector in public-private partnerships ("PPPs"). Partnering can help to maximise the contribution of Research and Innovation to achieving smart and sustainable growth in the EU, by making the research and innovation ("R&I") cycle more efficient and shortening the time from research to market. This is essential to achieve the European Research Area (ERA) by 2014 and to deliver on the Innovation Union, the Digital Agenda and other EU 2020 Flagships.

What types of partnering approaches exist at the European level?

P2Ps align national strategies, helping to overcome fragmentation of the public research effort. They also offer the potential of more efficient interaction with strategic international partners. The main P2P instruments and their objectives are:

• ERA-NET (100 projects since 2002): coordinate national research programmes in a selected area.
• ERA-NET Plus (9 projects since 2007): enhance joint funding by Member States and the EU in a selected area.
• Article 185 Initiatives (5 initiatives since 2003): integrate national and European research programmes in a selected area.
• JPIs (10 initiatives since 2008): coordinate and integrate national research programmes to address a societal challenge.
• SET (Strategic Energy Technology) Plan (since 2007): accelerate development of low-carbon energy technologies and streamline national research programmes in strategic technology areas at EU level.
• Europe INNOVA/PRO INNO Europe (since 2008): joint policy learning and development of better innovation support.

PPPs at European level are undertaken jointly by the EU and other public entities together with private partners to achieve shared objectives. PPPs in R&I aim at strengthening European industrial leadership and are used to leverage R&I investments in a specific area. The main PPP instruments and their objectives are:

• JTIs (5 initiatives since 2007): strengthen European industrial leadership in well defined areas.
• SESAR: modernise European Air Traffic Management (ATM).
• Recovery Plan PPPs (3 PPPs since 2008): maintain and strengthen industry sectors hit by the economic crisis.
• Future Internet PPP (FI-PPP, since 2011): ensure future Internet development at the service of society.
• COLIPA (since 2009): help industry comply with EU legislation.
• European Industrial Initiatives (EIIs) under the SET Plan (7 EIIs since 2010): address the demonstration/market rollout bottleneck in the innovation chain of low-carbon energy technologies.

Are there any concrete examples of partnering?

Examples of Public-Public Partnerships:

Information & Communication technologies: The Ambient Assisted Living Initiative (AAL), which gathers 20 Member States and 3 Associated States, is one example of Public-Public Partnerships. It focuses on innovation in support of policies to address demographic change. A total investment of more than €600 million has been mobilised to provide new ICT-based products and services and sustainable care systems for active and independent living of an ageing population.
Small and medium-sized enterprises constitute more than 40% of the participating entities in this partnership. http://www.aal-europe.eu/

Fighting rare diseases: There are at least 6,000 known rare diseases, affecting some 20 million European citizens. The ERA-NET E-Rare has developed a common European programme on rare disease research and launched three joint calls of €10 million. This, together with FP7 rare-disease-related calls, means that up to 40% of public research in this area is now carried out on a coordinated basis. Visit http://www.e-rare.eu/

Environmental protection (Baltic Sea): The Baltic Sea's capacity to provide the goods and services on which people depend has been significantly reduced due to natural and human pressures. To address this, nine countries are contributing to the ERA-NET Plus BONUS Plus, through a joint call of €22 million. This action also addresses wider policy aims under the Commission's Marine Strategy and Maritime Policy. Visit http://www.bonusportal.org/

Metrology: Since 2007, the science of measurement has been framed within the multiannual joint programme of the Article 185 Initiative on Metrology (EMRP). With a value of over €400 million, it has significantly reduced the duplication of research effort by pooling 44% of overall metrology resources in one initiative. Visit http://www.emrponline.eu/

Research into Alzheimer's disease: Neurodegenerative diseases are recognised by the MS as a major societal challenge. 23 countries have engaged in the pilot Joint Programming Initiative (JPI) on Neurodegenerative Diseases, including Alzheimer's. The JPI has launched its pilot call with a total budget of €14 million. Visit http://www.neurodegenerationresearch.eu/

Examples of Public-Private Partnerships:

Environmental protection (Air traffic): A good example of Public-Private Partnerships is the Clean Sky Joint Technology Initiative (JTI), which brings together 86 organisations in 16 countries (54 industries, 15 research centres and 17 universities), as well as the European Union. It aims to reduce the environmental impact of aviation while safeguarding competitiveness in Europe's aeronautical sector. The long-term nature and inherent high risk of the research involved necessitate public funding and cooperation among key industrial players. To date, investment amounts to almost €300 million and the first flight tests involving the resulting innovative technologies have been carried out. Visit http://www.cleansky.eu/

Computing & nanotech research: With public contributions from the EU and participating MS, the ARTEMIS (embedded computing systems) and ENIAC (nanoelectronics) JTIs aim to implement a research agenda defined by industry and academic/research organisations. Up to now, the EU and MS have committed more than €700 million to innovative collaborative research projects targeting application fields such as health, manufacturing, automotive and energy efficiency. Visit http://www.artemis-ju.eu/ and http://www.eniac.eu

Sustainable manufacturing: Under the European Recovery Plan, the Factories of the Future PPP involves a research programme of €1.2 billion to support the development of new and sustainable manufacturing technologies. It brings together a broad range of industrial stakeholders and aims to transform industrial processes to ensure global competitiveness and leadership.
http://ec.europa.eu/research/industrial_technologies/factories-of-the-future_en.html

What is the role of European Innovation Partnerships (EIPs)?

European Innovation Partnerships are a new approach to EU research and innovation to speed up innovations addressing the major challenges facing our society today. They are neither a P2P nor a PPP but provide a framework bringing together stakeholders across policy areas, sectors and borders. The goal is to pool resources from EU, national and regional, public and private sources in a way never possible before, directing larger resources towards addressing common problems.

Each EIP will tackle a specific societal challenge that is shared across the EU, such as energy security, climate change and resource efficiency, or health and ageing, and where there is a large new market potential for EU businesses. The pilot EIP on Active & Healthy Ageing (AHA) is intended to test the concept and assess how it can best be implemented. Its target is to increase the average healthy lifespan in the EU by two years by 2020. It seeks to improve the health status and quality of life of European citizens; to support the long-term sustainability and efficiency of health and social care systems; and to enhance the competitiveness of EU industry through business and expansion of new markets. It will focus on applying innovative solutions in areas such as health promotion and prevention; care and cure; and independent living and assistive technologies for older people.

What are the next steps for partnering at the European level?

The Commission's proposals for Horizon 2020, which are due to be adopted by the end of this year, will build on the steps set out in this Communication. The aim is to provide a common set of rules for future EU P2Ps and PPPs in order to simplify participation and cover the full cycle of research and innovation, while leaving the necessary flexibility for individual initiatives to achieve their objectives.

In the light of more extensive experience with the partnering approach in research and innovation, the intention is to launch a strategic exercise to determine where and how the partnering approach can be applied most successfully and the types of initiatives to which the instruments are best suited.

See IP/11/1059

For more information on Innovation Union, go to: http://ec.europa.eu/research/innovation-union/index_en.cfm
For more information on Digital Agenda, go to: http://ec.europa.eu/information_society/digital-agenda/index_en.htm
For more information on Europe 2020, go to: http://ec.europa.eu/europe2020/index_en.htm
2016-40/4923/en_head.json.gz/10507 | Maps and the Changing European
View of the World: 1350-1700
(Most map images are courtesy of Henry Davis Consulting at
Henry-Davis.com and the Cartographic Images
site.)
Beginning even before Columbus' voyage in 1492, Europeans extended
their view of the world, and relocated themselves in it to cope with a flood of new
information brought back by merchants and military adventurers. Surviving maps and
navigational charts are excellent sources of information about how they changed their view
of themselves and their world. Orientation and shape are two crucial details of
map-making which we often take for granted. Answer the question "which end is
up?" in a map and you learn something crucial about the map's makers' sense of
direction, and answer the question "what's at the center?" in a map and you
learn something crucial about the map's makers' sense of values. Many medieval maps
were oriented with East at the top, since that's the direction from which strange new
things came (including silk, spices, and the plague). They also tended to situate
the Mediterranean Sea at the center. The corollary question, "what's most
prominently marked?" reveals that Jerusalem and Rome tended to be the most common
prominent human-made features on these early maps. See, for example, the 1323 world map by Pietro
Vesconte. (For most of these maps, a scholarly monograph describing them may be found on
a hyperlink directly below the map, and for this one, an outline of the map oriented
north-up is also provided so you can better identify the Iberian peninsula, the British
isles, a reduced African continent separated by the Red Sea from a reduced Asia. The
fact that north-up orientation makes this so much easier for us is a sign of our own
normative construction of "the world.") This same east-up,
Mediterranean-centered pattern can be found in the 1350 map which accompanied
Ranulf Higden's Polychronicon, an extremely popular world history in manuscript
editions and the first world history printed in English translation by William Caxton.
Though many Anglo-European students
are taught that exploration began with Columbus' voyage, navigators from Mediterranean and
Scandinavian kingdoms had been making long speculative trading and settlement voyages for
much of the fifteenth century. One of the Norsemen, perhaps as early as 1440,
produced the famous "Vinland
Map." This clearly delineates the Newfoundland coast, and roughly locates
the St. Lawrence River estuary and Hudson's Bay, though the latter is reduced in scale.
The land mass is more or less correctly situated with respect to Iceland and
Greenland. The British Isles are somewhat reduced in size, and the Azores are
expanded in their location off the coast of Saharan Africa. This unique document's
information about the American continent is not found in any other contemporary map or
chart for nearly sixty years. For instance, the navigational chart
identified in 1924 by Charles de La Roncier as one produced for or used by Columbus (c.
1492?, still debated) shows nothing west of the Azores. Consult the monograph
for a black-and-white outline of the chart, but also note that it did not need to be
re-oriented to a north-up position. Columbus already was thinking about navigation
in the open sea, with a north-pointing magnetic compass and star chart organized around
Polaris, the northern hemisphere's "pole star," as his only means of converting
real-world observations into points on the map. Columbus also was well aware of how
the new technology of printing could improve public awareness of the importance of his
voyage, as he proved with the 1494 publication of the "Columbus Letter," which
you can see at the University
of Southern Maine's site dedicated to it.
The sixteenth century marks the
greatest increase in Anglo-European geographic knowledge for thousands of years, and it
very quickly established the East Coast of the Americas as a navigable region at the
periphery of a Eurocentric world-view. As early as 1507, Martin Waldseemüller first
used the term "America" to identify, as a single entity, the bulk of the
continent in the vicinity of (perhaps) northern Argentina. Click here for the magnified view
of that portion of the map, but note it represents as a narrow island the mere western
coast of the entire continents of North and South America. Magellan's 1520-1
circumnavigation provided significant new details of the world's shape. The 1526 map by Juan Vespucci,
nephew and heir of Amerigo, represents a vast increase in cartographic detail for the East
and West coasts of the Americas in only twenty years of exploration. Diego Ribero's 1529 map
corrects the curved distortion of the Vespucci map and further increases the level of
detail, producing perhaps the best map of the era. Sebastian Cabot's 1540 world map
depicts the Eastern and Western hemispheres in something approaching the same degree of
proportionate detail. Gerard
Mercator's "Nova Et Aucta Orbis Terrae Descriptio Ad Usum Navigantium emendate . .
." (1569) produces the first world-projection of Mercator's system of correction
for the curvature of the Earth. John Speed's 1627 world map
gives us a combination of Mercator's more accurate rendering of the globe on a
two-dimensional plane with high degree of detail for both hemispheres. The map
illustrates European understandings of the shape of the world, following Mercator's
north-up, Atlantic-ocean-centered projection, at the close of the most rapid growth in
their geographic knowledge. Note that California's Sea of Cortez has not yet been
fully explored, but its great extent has convinced the cartographer California is an
island. Remaining to be discovered are Australia, New Zealand, and the full extent
of the Arctic and Antarctic ice sheets, as well as many of the smaller Pacific island
chains. The Pacific Ocean, the planet's single largest topographical feature,
remains marginalized in Speed's vision, as it is for most 20th-century inhabitants of
England and America. Our Atlantic-centered world remained a constant in history and
in politics until (for some Americans, at least), the bombing of Pearl Harbor by a
Japanese fleet under the command of Admiral Yamamoto and Admiral Nagumo, on December 7, 1941. Residents of Australia and New Zealand have promoted "south-up" maps to challenge the Northern-Hemisphere-oriented view of the globe. They're good for rethinking your planetary identity, because when one approaches a spinning planet from outer space, there's no necessary "up," but either of the poles can argue equally for authority.
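Mercator's "correction for the curvature of the Earth," mentioned above, can be stated compactly in its modern spherical form. The snippet below is only an illustrative sketch (a unit sphere and the modern formula rather than Mercator's own graphical construction), but it shows why equal steps of latitude spread further apart toward the poles on charts of this kind.

```python
import math

def mercator(lat_deg, lon_deg, R=1.0):
    """Project latitude/longitude onto a flat chart (modern spherical Mercator)."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = R * lam
    y = R * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

# Equal 15-degree steps of latitude land further and further apart on the chart.
for lat in (0, 15, 30, 45, 60, 75):
    print(lat, round(mercator(lat, 0)[1], 3))
```

The widening spacing in the printed values is the same stretching that visually inflates high-latitude lands on Mercator-style charts.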
From the mid-seventeenth century,
"discovery" took place within a more nearly closed geographic system, filling in
the "terra incognita" spaces on pre-existing continents. However, at
nearly the same time, the size of the known universe had been expanding due to the
research of Kepler, Galileo, and Newton, who helped us understand the motions of
bodies far beyond the Earth. The combination of the two shifts in universal
orientation produced something like the world-view held by thinkers until the early
part of the 20th century, when radio-astronomy and particle physics once again
destabilized our sense of place in the universe.
[Images: The Milky Way Galaxy from the Southern Hemisphere; Galaxy M81, a spiral-arm
galaxy similar to our own. The galactic center (left image) is seen from about 2/3 of
the way out on a spiral arm, our approximate location in the galaxy.]
See "Powers of Ten" (Florida State University), a "vertical" orientation to your place in the universe as a consciousness that is aware of itself as a being operating only within a few orders of magnitude when compared to the orders of magnitude known to modern physics. Before you start, imagine what it would be like to see all of those orders of being simultaneously instead of in "slices."
2016-40/4923/en_head.json.gz/10519 | Tech’s roving R&D man
Frog Design’s Jan Chipchase interviews residents of the world’s hot spots so big business doesn’t have to.
Chipchase, photographed in Tokyo, documents the world for clients.
Jan Chipchase is the Indiana Jones of technology for the developing world. The British-born, Shanghai-based researcher travels the globe, trying to understand how and why the planet’s poorest people would use cellphones and other gadgets. Part cultural anthropologist and explorer, and part designer and entrepreneur, Chipchase uses his findings to develop new products and services that can help improve commerce and life in remote and sometimes dangerous parts of the world, such as Accra, Ghana, or Jalalabad, Afghanistan.
Chipchase, 40, earlier this year joined San Francisco consultancy Frog Design to help the firm better understand the needs of consumers in the developing world — people that Frog's clients (Disney, Hewlett-Packard, and Dell, to name a few) are eager to serve. Before leaping to Frog, Chipchase spent more than a decade at Finnish phonemaker Nokia studying users in far-flung emerging markets. "He was one of the first people to write about the use of airtime as a form of currency," says Bill Maurer, an anthropology professor at the University of California at Irvine. Chipchase documented Uganda's sente system, in which villagers transfer money across distances by buying and passing along cellphone minutes. Vodafone, the U.K.-based mobile-phone operator, later launched a similar money service in Kenya with a local partner.
In 2008, Vodafone partnered with Afghan mobile operator Roshan to launch the service in Afghanistan (where it is called M-Paisa). As the leading expert in this nascent industry, Chipchase wanted to see how it compared with the local hawala system of informal money brokers. So he and a team of researchers spent two weeks in Kabul and Jalalabad asking locals basic questions. “Most of the workers there sew it into their clothes,” Chipchase recalls. “So we asked, ‘If you could carry it on your phone instead, would you?’ ”
Chipchase travels light, keeping his mixed-gender, multilanguage research teams to just two or three people. His attention to cultural details wins him trust more quickly in communities. He passed, for example, when workers offered him tea in Jalalabad during Ramadan, when Muslims don’t eat or drink during the day. And most important, he questions every assumption. “Anyone who tells you he really understands what’s going on is lying to himself,” says Chipchase.
His work may sound professorial, but his employer and its clients are motivated by profits. A cellphone for innumerate consumers that features icons instead of numbers isn’t just largesse, it’s good business — the world is filled with billions of hard-to-reach buyers of technology. Luckily for Frog’s clients, Jan Chipchase knows where to find them.
by Jessi Hempel
November 29, 2010, 8:41 AM EDT
2016-40/4923/en_head.json.gz/10594 | How Self-Sustaining Space Habitats Could Save Humanity from Extinction
George Dvorsky | 8/30/12 12:11pm | Filed to: Futurism, Biospheres, Space Exploration, Space Colonization, Science
This planet can't protect us forever. Sooner or later, there'll be a catastrophe that renders this world uninhabitable for humans. And when that day comes, we'll need to know already how to live in space.
Yesterday, we explained why we should reboot the Biosphere 2 projects of the 1990s. There are a lot of scientific and technological benefits from learning to create self-sustaining habitats — but the biggest reason is that we need to know how to live in space before we have nowhere else to live.
There's little question that this is an important area of inquiry. We clearly want to venture out into space, but if we're going to do so, we'll eventually have to lose our dependence on Mother Earth. Colonists won't always be able to rely on a steady stream of supplies from Earth, which means they're eventually going to have to figure it out on their own.
Physicist Stephen Hawking suggests that our ongoing efforts to colonize space could ultimately save humanity from extinction. As it stands, Earth is our only biosphere — all our eggs are currently in one basket. If something were to happen to either our planet or our civilization, it would be vital to know that we could sustain a colony somewhere else.
And the threats are real. The possibility of an asteroid impact, nuclear war, a nanotechnological disaster, or severe environmental degradation makes the need for off-planet habitation extremely urgent. And given our ambitious future prospects, including the potential for ongoing population growth, we may very well have no choice but to leave the cradle.

We're obviously not going to get there overnight — but here's how we could do it.

Baby steps
As already noted, the first thing we need to do is develop a fully functional biosphere for long-term human occupation. We still haven't figured out how to do this, so it should be at the top of our priority list. We especially need to figure out ways to keep CO2 levels in check, maintain a steady internal temperature, avoid water acidification, and find a way to keep our sanity in check given the close confines.

Once this has been done, we can start to think about going into space. The initial structure or set of building materials could be brought up from Earth (either by rocket or space elevator), or we could make it difficult for the astro-biospherians by making them pull together all their materials from local sources such as asteroids (call it the 'teach a man to fish' approach).
But life in an orbital biosphere will present unique challenges. Growing plants in a zero gravity environment is possible, but difficult (they tend to sprout in bizarre orientations). There's also the problem of prolonged human exposure to zero gravity, and the long-term effects of solar radiation.
That said, there are potential solutions to these issues. Back in 1974, physicist Gerard O'Neill outlined a freestanding orbital habitat consisting of large cylinders that would spin along an axis at a rate of one rotation per minute. This would result in a simulation of gravity along its inner surfaces.

Initially, these self-sufficient space stations should be kept simple — pilot projects to prove that humans can live off-planet and independent of Earth — an important precedent for any subsequent missions to space, or for colonization efforts to other terrestrial bodies.

And indeed, as time passes, these projects will have to assess the viability of more complex and long-term missions. As Ben Austen has warned, we could run into problems such as inbreeding. His solution, however, is to stock our habitats with DNA to expand upon the existing gene pool. More radically, colonists could take advantage of cybernetics, advanced genetic engineering practices, and life extension technologies to overcome these issues as they arise.
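To get a feel for the scale such a spinning habitat implies, here is a quick back-of-envelope check. It is a sketch only: it takes the one-rotation-per-minute figure quoted above and asks how large the cylinder must be to simulate Earth-normal gravity at its inner surface.

```python
import math

rpm = 1.0                          # rotation rate quoted for the O'Neill design
omega = rpm * 2 * math.pi / 60     # angular speed in rad/s
g = 9.81                           # target "gravity" at the inner surface, m/s^2

# Centripetal acceleration at the rim is a = omega^2 * r, so solve for r.
radius = g / omega**2
print(f"radius needed: {radius:.0f} m (~{2 * radius / 1000:.1f} km diameter)")
```

At one rotation per minute, Earth-like gravity requires a radius of roughly 900 meters, which is one reason O'Neill-style habitats are drawn at kilometer scale.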
What's not known, however, is how long a human offshoot could live in Earth's orbit alone. It's conceivable that a self-sustaining base could function for generations, but that doesn't seem like a reasonable long-term solution for the future of human civilization — particularly if the home planet is inaccessible for whatever reason. But this is why we should also focus our efforts on building closed-loop systems on the Moon, Mars, and beyond.

Extraterrestrial but planetary biospheres
Back in 2000, NASA completed a $200 million study called the "Roadmap to Settlement" in which they described the potential for a moon-based colony in which habitats could be constructed several feet beneath the lunar surface (or covered within an existing crater) to protect colonists from high-energy cosmic radiation. They also outlined the construction of an onsite nuclear power plant, solar panel arrays, and a number of methods for extracting carbon, silicon, aluminium and other materials from the surface.
More recently, NASA has also confirmed the presence of water ice on the Moon — a critical ingredient for any self-sustaining colony. Most of it resides at the Moon's north pole, but it's a fair amount — about 600 million tons worth.

Assuming that the radiation problem could be addressed, it might also be possible to set up solar-powered farming enclosures. If we could start farming at the lunar North Pole, experts estimate that a 0.5 hectare space farm could feed upwards of 100 people.
At the same time, however, there will be some considerable challenges. The Moon features a long lunar night, which could limit solar power and require a colony to withstand temperature extremes. The Moon is also low in light elements, namely carbon, nitrogen, and hydrogen. Low gravity (at ⅙ of Earth's) could prove to be a long-term problem. The Moon is also completely devoid of an atmosphere, and it has virtually no potential as a future terraforming project. At best, the Moon could serve as a good proof-of-concept station for future projects, or for a short-term stay in the event of a catastrophe on Earth.

As NASA's roadmap suggests, a colony on the Moon could help us prepare for a mission to Mars. It would probably be wise to set up, test, and train a self-sustaining colony a little closer to home before we take that massive leap to Mars.

And indeed, Mars holds considerably more potential than the Moon. It features a solar day of 24 hours and 39 minutes, and a surface area about 28.4% of Earth's. The Red Planet also has an axial tilt of 25 degrees (compared to the Earth's 23.4 degrees), resulting in similar seasonal shifts (though they're twice as long, given that Mars's year is 1.88 Earth years). And most importantly, Mars has an existing atmosphere, significant mineral diversity (such as ore and nickel-iron), and water. Actually, it has a lot of water. Recent analysis shows that Mars could have as much water underground as Earth.
So Mars could provide an excellent place for humanity to test a closed-loop habitat — or to reboot its civilization, in the event of a catastrophe on Earth. Given all that Mars has to offer, it could conceivably support a colony living in enclosed habitats for an indefinitely long period of time. And depending on the technological sophistication of the society in question, it could also go about the long and arduous process of terraforming the planet. Assuming a no-Earth scenario, the colonists would have little choice but to plug away and be patient.
Looking deeper into this scenario, the colonists would eventually have to weigh the pros and cons of their efforts. It might make more sense for them to return to Earth in hopes of salvaging things there — terraforming a broken Earth could prove considerably easier than terraforming Mars. Ultimately, it would depend on the condition of Earth, which could meet a grim fate in any number of ways, including a runaway greenhouse gas effect that could turn it into a Venus-like planet (which could make it much worse than Mars), nanotechnological ecophagy (a grey goo scenario in which self-replicating nanobots have converted virtually everything into useless mush), or an asteroid impact (which would only be a temporary problem).

That said, Mars may not be the only terrestrial body in our solar system worth colonizing. Saturn and Jupiter feature a number of moons that could also be considered, though their distance from the Sun could pose some problems.

Finally, there's also the possibility that colonists will want to venture into deep space and find entirely new planets to inhabit — including Earth-like planets that are ready for immediate occupancy. Self-sustainability would have to be a key feature of the expedition, as the colonists would not be able to depend on the Earth for any resources.
And as for knowing where to go, it would be akin to the Pacific Islander colonization campaigns of the past; just pick a direction and hope for the best.

Timelines
Predicting timelines for sustainable and permanent off-planet habitability is not easy — mostly because no one is really working on the problem. Most of our efforts assume that Earth will always be there, ready to jump in and support any colony that needs help.
But if we focus our efforts, it's not unreasonable to expect that we could develop our first self-sufficient biosphere right here on Earth by the end of the 2020s — if not a lot sooner. It's been nearly 20 years since the last Biosphere 2 project, and there's a good chance that today's science and technology can solve many of the problems we experienced during those missions.

Once that has (finally) been done, it's entirely possible that self-reliant orbital habitats could be constructed during the 2030s. By that point, our technologies will be advanced enough that any unsolved problem could be addressed by A.I. or sophisticated modeling techniques. At the same time, advanced 3D printers and molecular assemblers (aka "fabs") will make life appreciably easier for colonists working in space.

After this stage, the technologies required for setting up closed-loop colonies on the Moon, Mars, or elsewhere would largely be in place. So, assuming no social upheavals or other unpredictable events, we could be capable of living permanently and independently off-planet any time after 2030 or so, and certainly no later than the 2050s.
There is another, albeit more radical, way for us to ensure our ongoing existence in the event of civilizational catastrophe. Assuming that uploads will someday be possible, it would be wise to "backup" human civilization off-planet. This idea was initially proposed by author Vernor Vinge, who suggested that we bury a supercomputer on the moon (or elsewhere) that could house an entire civilization. Alternately, this uploaded civilization could be sent on a mission into deep space in hopes of reviving a new society elsewhere. But given the highly speculative nature of this possibility — and knowing that a disaster can strike at any time — we should continue to work on viable solutions for purely biological humans.

All this said, these timelines and predictions assume, of course, that we actually care about building self-sustaining habitats. As history has repeatedly shown, our ability to do something doesn't necessarily mean that we will. But given what's at stake, it's a prospect that needs to be taken a bit more seriously.
2016-40/4923/en_head.json.gz/10690 | California ready to cut greenhouse gases. Next, doing it.
After five years, California has put in place rules to cut greenhouse-gas emissions statewide back to 1990 levels. But lingering effects of the recession have pushed implementation back a year to 2013. By Daniel B. Wood, Staff writer
A parking structure at the University of California, San Diego, has dual purpose solar panels. Each ‘solar tree’ offsets 13.2 metric tons of carbon per year.
Mike Blake/Reuters/File
California's historic effort to remake global-warming regulations in the United States is at last starting to take off its training wheels.

When then-Gov. Arnold Schwarzenegger signed the Global Warming Solutions Act in 2006, his purpose was not only to establish the most rigorous regime of greenhouse-gas emissions reductions in the US, but also to prove to a reluctant nation that the "cap and trade" policies rejected by Washington are not an economic catastrophe waiting to happen.
The success or failure of that law will have national – and even international – ramifications, as other states and Washington itself look to see if California can avoid the doomsday scenarios laid out by critics: lost jobs, higher energy costs, and dubious environmental gains.
Slowly, the first answers are starting to emerge.

The five-year slog has revealed both the surprising willingness of some industries to take part in the process and the dogged determination of others to impede it at virtually every step. The report card is mixed. While all the rules are now in place and there are signs that the law has kindled greater action against global warming statewide, full implementation has been pushed back a year, to 2013, because of the lingering effects of the recession. Lawsuits could also undermine its effectiveness.

"The state has done the lion's share of the work in spelling out what needs to be done to reach the goals, and it's a pretty amazing story to see how far the auto companies have come in doing their part," says Simon Mui of the Natural Resources Defense Council, an environmental group. "But the oil companies are still dragging their feet and fighting tooth and nail."

Specifically, the global warming law, known as AB 32 (Assembly Bill 32), mandates that California reach 1990 levels of greenhouse-gas emissions by 2020 – a 25 percent cut from current levels.

To reach that goal, AB 32 essentially provides a stool with four legs: instituting a cap-and-trade program, lowering carbon content in fuel, increasing fuel efficiency in vehicles, and pushing communities to become more energy efficient.

The effort to roll out each of the four main components of AB 32 has met with varying degrees of success and resistance. Texas oil companies even tried to repeal AB 32 in its entirety by funding a ballot initiative, which failed. Taken together, the developments of the past five years show the complexity of attempting such sweeping greenhouse-gas regulations all at once.
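To put the headline target in concrete terms, here is a small illustrative sketch of an annually declining emissions cap. The numbers are symbolic (current emissions are simply set to 100), not official inventory figures, and the straight-line schedule is an assumption made only for illustration.

```python
# Illustrative units only: current statewide emissions = 100, so the 1990
# level is 75 (a 25 percent cut), per the target described above.
current_emissions = 100.0
target_2020 = 0.75 * current_emissions

start_year, end_year = 2013, 2020          # implementation window noted in the article
step = (current_emissions - target_2020) / (end_year - start_year)

for i, year in enumerate(range(start_year, end_year + 1)):
    cap = current_emissions - i * step     # linearly declining annual cap
    print(year, round(cap, 1))
```

A real allowance budget would follow the regulator's adopted schedule rather than a straight line; the point here is only how a declining cap translates a long-range target into year-by-year limits.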
1. Cap and trade: still ironing out the kinks

The most controversial element of AB 32 is cap and trade – the mechanism that places an annually declining cap on greenhouse-gas emissions and allows polluters to buy and sell carbon allowances. Figuring out exactly how that will work took three years and was generally seen as a rough ride.

To ease the fears of those who worry that businesses will flee to other states, California is allowing state regulators to give out nearly half their allowances to polluters free of charge. Moreover, those that are sold will cost $10 per one metric ton (2,200 pounds) of emissions. Environmentalists say handing out free allowances undermines cap and trade, and that $10 is far too cheap. The futures market puts the price tag at $20.

Industry advocates, such as the AB 32 Implementation Group, counter that the price is still too high. "The costs that will be incurred by California's regulated industries are alarming," says executive director Shelly Sullivan. "Consumers and taxpayers will ultimately pick up the tab in higher costs for electricity, consumer products, government services, or even college tuition. Cap and trade will work only if other states impose similar emission caps in their economies," she adds.

Legal experts have suggested that opponents could take cap and trade to court, arguing that it forces utilities in other states to live by California's rules when selling electricity in the state – a violation of the Constitution's commerce clause, they say.

2. Mandating lower-carbon gasoline

The commerce clause, which vests power to govern interstate commerce in the federal government, has already caused problems for AB 32. California's low-carbon fuel standard, the world's first, was enacted the year after AB 32. It aims to reduce the carbon content of gasoline. But it was ruled unconstitutional by a federal district judge on Dec. 29. US District Court Judge Lawrence O'Neill said LCFS discriminates against out-of-state producers.

The case has had a ripple effect. Of 18 states that were poised to follow suit, several – including Pennsylvania, New Jersey, Maine, New Hampshire, Washington, and Oregon – have now either backed off or are reconsidering.

Like cap and trade, LCFS is at the center of the debate over whether AB 32 is merely symbolic. Oil companies see themselves as trying to keep California from being on the costly forefront of policies whose carbon savings will be easily erased by polluters like India and China.

Charles Drevna, president of the American Fuel and Petrochemical Manufacturers, says a nationwide low-carbon fuel standard would worsen air quality because it would prevent American refineries from importing petroleum obtained from oil sands in western Canada. That means the US would have to import more oil in tankers from the Middle East and elsewhere. At the same time, Canadian oil would be shipped in tankers across the Pacific to China and other Asian locations, thereby enlarging the carbon footprint for the oil imported and exported.

Supporters say AB 32 is crucial for its role in addressing global warming. "This was the first big step taken to really deal with it," says Nabil Nasr, director of the Center for Integrated Manufacturing Studies at the Rochester Institute of Technology in New York.

3. Increasing fuel efficiency

Other aspects of AB 32 have met with more success. On Jan. 27, the California Air Resources Board (CARB) mandated that 15 percent of all new cars sold in the state by 2025 should run with zero or near-zero emissions.
The result would be some 1.4 million electric, plug-in hybrid, and hydrogen cars on California roads within 13 years. Today, there are 10,000 such vehicles in the state.

In addition, CARB mandated a 50 percent reduction from today's levels of overall auto-emitted greenhouse gases. Domestic car-makers, who are already moving in both of these directions on their own, have been largely supportive of CARB's efforts.

4. Urging more sustainable communities

The fourth major element of AB 32, the demand for more sustainable communities, has also begun to spread through the state. A law passed in 2008 to meet that goal, Senate Bill 375, calls for California communities to reduce transportation-related greenhouse-gas emissions by better use of roads, more open spaces, and better residential planning. Hundreds of cities and towns are participating, with an emphasis on regional cooperation.

"When the state came together to figure out how to meet the goals of AB 32 ... one thing that wasn't talked about enough was … how we conceive and build communities in the first place, to minimize the use of driving to work, shopping, and home again," says state Senate majority leader Darrell Steinberg in an online video. "This legislation is being touted in many states and in D.C. as the most significant law to change the direction of urban planning."

SB 375 authorized CARB to set greenhouse-gas reduction targets for cars and light trucks by region and allows each region the flexibility to develop its own plan. It also synced the federally required transportation-planning process with the state-mandated housing allocation process.

"Linking these two processes leads to more holistic planning," says Bill Higgins, executive director of the California Association of Councils of Governments.

But the first region out of the gate, San Diego, is facing a lawsuit as a result. Though CARB says the San Diego plan would achieve its greenhouse-gas reduction targets, opponents – this time, environmentalists – claim it would increase sprawl and pollution while not investing enough in public transit.

Some cities and businesses, however, are moving ahead on their own, suggesting that AB 32 is having indirect effects.

Santa Rosa, north of San Francisco, is redesigning its downtown around rail. The intent is to minimize car travel by keeping workers closer to their jobs and retail. Meanwhile, Bowman Design Group in Signal Hill reduced greenhouse emissions by 65 percent in 2009, replacing company cars with hybrids, improving natural light and ventilation, and consolidating office equipment, among other things. The annual savings is $9,000, and the company – which creates exhibitions for museums and firms – has become a leader in helping the exhibit industry go green.

Says chairman Tom Bowman: "AB 32 is an impressive effort, and even though it is being implemented gradually over time it is already delivering benefits to California."
2016-40/4923/en_head.json.gz/10771
The Terrafugia Transition taking off from runway 17 at Plattsburgh International Airport. Photo courtesy of Terrafugia.
The Terrafugia Transition shortly after takeoff. Photo courtesy of Terrafugia.
The Terrafugia Transition flying in formation with the chase aircraft. Photo courtesy of Terrafugia.

Flying car takes wing
MIT alums' invention makes its first test flights
David Chandler, MIT News Office
A prototype of what is being touted as the world's first practical flying car took to the air for the first time this month, a milestone in a project started four years ago by students in MIT's Department of Aeronautics and Astronautics.
At 7:40 a.m. on March 5, the winged car taxied down a runway in Plattsburgh, N.Y., took off, flew for 37 seconds and landed further down the runway — a maneuver it would repeat about a half dozen times over the next two days. In the coming months the company, a Woburn-based startup called Terrafugia, will test the plane in a series of ever-longer flights and a variety of maneuvers to learn about its handling characteristics.
Above: The first flight of Terrafugia Transition, March 5, 2009.
Aviation enthusiasts have spent nearly a century pursuing the dream of a flying car, but the broader public has tended to view the idea as something of a novelty. Still, such a vehicle could have more practical appeal now that the Federal Aviation Administration has created a new class of plane -- Light Sport Aircraft -- and a new license category just for pilots of such craft, including Terrafugia's two-seater Transition. The "sport pilot" license required to fly the Transition takes only about 20 hours of training time, about half that required to earn a regular pilot's license.
The street-legal Transition is powered on land and in the air by a recently developed 100 hp Rotax engine that gets 30 mpg on the highway using regular unleaded gasoline. As a plane, its 20-gallon tank gives it a 450-mile range with a 115 mph cruising speed. The pilot can switch from one mode to the other from the driver's seat, simultaneously folding up the wings and shifting the engine power from the rear-mounted propeller to the front wheels in about 30 seconds.
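Those performance figures imply a rough envelope that is easy to check. The sketch below uses only the numbers quoted above and ignores reserves, climb, taxi and wind, so it is a back-of-envelope illustration rather than a performance claim.

```python
cruise_mph  = 115      # cruising speed quoted above
range_miles = 450      # still-air range quoted above
tank_gal    = 20       # fuel capacity
road_mpg    = 30       # highway mileage on the ground

endurance_h = range_miles / cruise_mph    # roughly 3.9 hours aloft
burn_gph    = tank_gal / endurance_h      # roughly 5.1 gallons per hour in cruise
road_miles  = road_mpg * tank_gal         # roughly 600 miles if the same tank is driven

print(f"{endurance_h:.1f} h endurance, {burn_gph:.1f} gal/h in cruise, {road_miles:.0f} mi on the road")
```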
Speaking at a March 18 news conference in which the Transition's first test flight was announced, Terrafugia CEO and co-founder Carl Dietrich '99, SM '03, PhD '07 said the FAA rule change and the Transition could help transform the way people move around the country — especially in rural areas. "One of the biggest problems pilots have right now is that most of the 5,000 general aviation airports in the U.S. don't have any car rental facilities, or even a cab stand," he said, noting that the Transition could open many of these underused airports to easier, more practical use by private pilots.
The vehicle may also lead to improved safety. "One of the largest causes of accidents is pilots flying in bad weather," he said. With the Transition, a pilot who spotted bad weather ahead could simply land at the nearest airport, fold up the wings, drive through the weather on local roads, and take off from another airport once past the storm.
The first testing of Terrafugia's car-plane concept took place with a one-fifth scale model in MIT's Wright Brothers Wind Tunnel in 2005, while Dietrich and his wife, Anna Mracek Dietrich '04, SM '06, now the company's COO, and VP of Engineering Samuel Schweighart SM '01, PhD '05, were all students here, as were two of the other company principals.
The full-sized version being tested now is a proof-of-concept vehicle, to be followed later this year by a production prototype. The company is taking deposits now and hopes to start delivering its first Transitions — or "roadable planes," as the company calls them — in late 2011.
Test pilot Phil Meteer, who was at the controls in Plattsburgh, said that the short and simple first flight was both "remarkably unremarkable" and vitally important: "Ninety percent of the risk in the total program comes in the first flight, and now we're past that."
A retired U.S. Air Force colonel, Meteer said the plane handled so smoothly in the test flights that all of the possible contingencies he had practiced became irrelevant. "You're in a hypervigilant state" during the initial takeoff, he said, but as he saw how smoothly the flight was going he had a "wahoo moment: none of this is happening!"
A version of this article appeared in MIT Tech Talk on April 1, 2009 (download PDF).
Topics: Aeronautical and astronautical engineering, Energy, Innovation and Entrepreneurship (I&E), Alumni/ae
Road-worthy plane? Or sky-worthy car?
MIT Department of Aeronautics and Astronautics
2016-40/4923/en_head.json.gz/10982 | WATCH: The paralyzed woman who moved a robotic arm with her mind
Chris Gayomali
A motor-degenerative disease has rendered Jan Scheuermann, 53, unable to complete even the most basic daily tasks. First diagnosed with spinocerebellar degeneration in 1996, Scheuermann progressively lost control of her body over time, and is now unable to move her arms or legs. But thanks to two electrical implants attached to her brain, Scheuermann has the ability to feed herself using a remote-controlled robotic arm. "They asked me if there was something special I wanted to do," Scheuermann tells ABC News. "And I said my goal is to feed myself a bar of chocolate."
In this experiment, biomedical engineers at the Pittsburgh School of Medicine and the University of Pittsburgh Medical Center attached 96 electrodes to Scheuermann's brain to read her neural activity. These pulses of electricity were then channeled into a brain-computer interface, or BCI, which allowed her to control the robotic arm seen above using just her thoughts. After 14 weeks of training, doctors are calling Scheuermann's progress and determination "remarkable." The hope is that one day, this type of technology will find its way into everyday home treatments and grant people crippled by spinal cord injuries or brain diseases the ability to move again. "This is the ride of my life," said Scheuermann. "This is the roller coaster. This is skydiving. It's just fabulous, and I'm enjoying every second of it."
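The decoding step, turning recorded neural activity into movement commands, is the heart of such a system. The snippet below is a toy illustration of one common approach (a linear decoder fit to calibration data), not the Pittsburgh team's actual algorithm; the simulated firing rates and tuning weights are made-up stand-ins for real recordings.

```python
import numpy as np

# Toy calibration data: 96-channel firing rates paired with intended 3-D hand velocity.
rng = np.random.default_rng(0)
n_samples, n_channels = 500, 96
true_tuning = rng.normal(size=(n_channels, 3))           # hidden per-channel tuning weights
velocity = rng.normal(size=(n_samples, 3))               # intended velocities during calibration
rates = velocity @ true_tuning.T + 0.5 * rng.normal(size=(n_samples, n_channels))

# Fit a ridge-regularised linear decoder W so that rates @ W approximates velocity.
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_channels), rates.T @ velocity)

# Decode a new instant of neural activity into a velocity command for the arm.
new_rates = rng.normal(size=(1, n_channels))
decoded_velocity = new_rates @ W
print(decoded_velocity)
```

In practice such decoders are recalibrated frequently and combined with smoothing and safety limits; the point here is only the shape of the computation from electrode readings to an arm command.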
2016-40/4923/en_head.json.gz/10998 Unmanned helicopter missions a step closer By Mike Cronin | Wednesday, July 7, 2010
With Sanjiv Singh's help, perhaps a Black Hawk downed won't be so deadly. Technology he developed with a team from Piasecki Aircraft Corp. in Delaware County enabled a full-sized helicopter last month to fly unmanned, choose a landing site in unknown territory and land itself.

The unprecedented feat means "actual missions are not far away for unmanned helicopters," said Singh, a Carnegie Mellon University Robotics Institute research professor.

"We're not only talking about the capacity to be able to transport troops, but also deliver time-critical medical care to combat casualties and extract those casualties," said John W. Piasecki, president and CEO of Piasecki Aircraft. Autonomous helicopters could save military commanders from sending crews into hostile environments and putting them at severe risk, Piasecki said.

The Army paid Piasecki "less than $2 million" over three years to develop the unmanned helicopter, said Michael Beebe, a project manager with the Army's Telemedicine and Advanced Technology Research Center at Fort Detrick, Md. Beebe couldn't say when an autonomous helicopter might be deployed in the field. That decision is made by the "war fighters," he said.

In an interview, Piasecki referenced "Black Hawk Down" several times. The book, by journalist Mark Bowden, chronicles a disastrous U.S. mission in Mogadishu, Somalia, in October 1993. During the Battle of Mogadishu, 18 American soldiers were killed in action and 73 others were wounded. Armed residents of Mogadishu shot down two Black Hawk helicopters.

Autonomous recovery "is another way of getting our boys back," Piasecki said. "When you talk about combat casualties in Iraq and Afghanistan, this is a way to minimize the risk."

Singh has worked on autonomous helicopter navigation for about eight years, he said. During last month's successful test, a 10-meter-long Boeing Co. Unmanned Little Bird helicopter in Mesa, Ariz., flew at a speed faster than 20 knots while maneuvering around a 60-foot-high obstacle. The copter detected high-tension wires and repeatedly demonstrated its ability to land.

A computer, a software package and sensors that include a laser scanner, a GPS, gyroscope and accelerometer made it possible for the unmanned helicopter to complete its mission, Singh said. "The helicopter has the ability to locate the casualty, then find a suitable landing site in the vicinity," Singh said. "It plans its approach and makes sure it avoids any obstacles."

The technology is advanced enough that it could help human pilots land in low-visibility areas within one to two years, Singh said. The ability to extract the wounded or dead where it's too dangerous or not possible for manned helicopters to land is what most excites Beebe, he said. "We're trying to save lives any way we can."
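To give a sense of what "find a suitable landing site" involves, here is a toy sketch of terrain evaluation over a laser-scan elevation grid. The grid, weights and slope threshold are invented for illustration; this is not the CMU/Piasecki team's actual algorithm.

```python
import numpy as np

def pick_landing_cell(elev, casualty_rc, max_slope=0.15, w_rough=1.0, w_dist=0.01):
    """Score each cell of an elevation map and return the best touchdown cell."""
    gy, gx = np.gradient(elev)                 # local slope components
    slope = np.hypot(gx, gy)
    # Roughness: deviation of each cell from the mean of its 3x3 neighbourhood.
    padded = np.pad(elev, 1, mode="edge")
    neigh_mean = sum(padded[i:i + elev.shape[0], j:j + elev.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    rough = np.abs(elev - neigh_mean)
    rows, cols = np.indices(elev.shape)
    dist = np.hypot(rows - casualty_rc[0], cols - casualty_rc[1])
    score = slope + w_rough * rough + w_dist * dist
    score[slope > max_slope] = np.inf          # too steep: not a valid landing zone
    return np.unravel_index(np.argmin(score), score.shape)

elev = np.random.default_rng(1).normal(scale=0.05, size=(50, 50)).cumsum(axis=0)
print(pick_landing_cell(elev, casualty_rc=(25, 25)))
```

A real system would also fuse GPS and inertial data, reason about approach paths, wires and wind, and keep re-checking the site as the aircraft descends.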
2016-40/4923/en_head.json.gz/11028
The final numbers are in and 60,000 people attended this year's show. It was crowded, but it definitely didn't seem as crowded as last year, which had over 70,000 attendees. You couldn't walk from booth to booth in the South Hall last year. I think one thing that's changed in that time is the accessibility of the show floor to consumers via the Internet. E3Insider.com attracted 1.5 million consumer visits over the three days. You can get all of the press conference videos on that site. And that's just one site. Gamespot.com, IGN.com, Gametrailers.com, MTV.com, G4TV.com and others were streaming feeds and video online, and G4 was broadcasting much of the show live. In many ways, except for the behind-closed-door meetings, consumers now get better access to the games by not being on the crowded show floor than some attendees and journalists who are. From game trailers to developer interviews to live press conferences, E3 is instantly available for anyone for free. That's on top of the daily blogs you've been reading here. It wasn't long ago -- last year really -- that information would be revealed at the end of a press conference. Now blogs update things by the minute and streaming technology allows gamers to watch the action unfold in real time -- and they don't get stuck for an hour in the Sony Pictures parking lot. It's appropriate that technology -- the driving force in gaming -- has opened the annual trade show to anyone with a PC.
Traditionally, the last day of the show is quieter than the first two, as many people bolt early because they're burned out or are trying to get home for the weekend. This year seemed like it was about as busy as yesterday at the noisy South Hall. I spent the day there catching up on games I had missed over the first two days. Sometimes it's nice to just get your hands on the demos out in the booth, rather than getting stuck for 30 minutes on one game behind closed doors. I was impressed by a lot of the third-party games I saw today. I visited Sega, Namco Bandai, Konami, Capcom, D3, NcSoft, Vivendi Games, Sony Online Entertainment, D3 Publishing, Square Enix, 2K Games and Eidos.
I spent some time on Sega's PS3 game, Full Auto 2. The game's still early, so I didn't notice much of a difference between this game and the Xbox 360 original. I enjoyed blowing things up in the original. Konami's Hellboy is yet another cool next gen comic book game - -this one is being done with the help of "Hellboy" movie director Guillermo del Toro. Fans of Dance Dance Revolution will see a ton of new games this year, including Xbox 360, PS2, Xbox, stand-alone devices and mobile phones.
I saw a very cool and original PSP game at D3 Publishing called Dead Head Fred, which has the quirky feel of the old Grim Fandango LucasArts PC game. In this game, your character's head has been stolen, so your corpse goes around decapitating others and using their heads -- each with special powers. The game's campy and innovative gameplay was promising -- just what the PSP needs.
Capcom had two great Xbox 360-exclusive games on display. Dead Rising is basically the Japanese publisher's campy, Mature-rated take on the recent remake of George A Romero's "Dawn of the Dead." The action mostly takes place in a mall overrun by zombies. Another cool game is Lost Planet, which is an action sci-fi title set on a frozen planet inhabited by creatures. Both of these original games seemed to seamlessly blend story with fun arcade action.
I heard some people complaining about the new Madden game. They didn't think the new game's focus on the running game brought enough new to the table. The one bad thing about any monopoly is that there are no competitors pushing it to innovate. The NBA and NHL allow such competition, and 2K Games had some very impressive playable Xbox 360 sports games on display. At first glance, these games look like you're watching a TV broadcast. NASCAR fans were left in the dark at the EA booth. There was only a single PSP game on display, and PR reps were tight-lipped when asked if the game was coming to next gen this year. I imagine there will be new games this year, but E3 wasn't the chosen venue to debut them.
Gamers have plenty of good games heading to whichever console or portable they own. And there are more resources out there than ever before to make sure that the games you spend your hard-earned money on are worth the investment. Nintendo takes my vote for most impressive showing this year with its wonderful Wii, followed by Microsoft and its strong Xbox 360 lineup and Xbox Live offerings. Sony comes in third with its expensive PS3 and lack of killer apps.
I'm glad the show's come to a close. Although it was a bit quieter and less crowded than last year, it's still a grueling three days. I'm looking forward to the day when we can experience E3 virtually from the comfort of our homes. That may not be very far in the future.
-- John Gaudiosi
By Bob Greiner
Category: E3: Off Screen
I wanted to thank the Post for their coverage of the E3 expo. While it was fun to read the bloggers, the real news coverage (i.e., balanced presentation of facts) was by the legitimate journalists, like those at the Post (and the NYTimes).
Thanks a bunch! Posted by: Ijo | May 14, 2006 4:21 AM The comments to this entry are closed. | 科技 |
2016-40/4923/en_head.json.gz/11067 The Plight of the Migrants
Oct 3, 2009
Western tanager. Courtesy USFWS.
This feature was first published in the 2009 Wilderness Magazine. To receive the annual magazine and quarterly newsletters from The Wilderness Society, become a member today!
Writer David S. Wilcove is a professor of ecology, evolutionary biology, and public affairs at Princeton University and the author of No Way Home: The Decline of the World’s Great Animal Migrations.
By David S. Wilcove
Sometime in early August, during one of the first chilly nights of the season in Yellowstone National Park, a western tanager will awaken, fly to the top of a lodgepole pine tree, and launch itself into the ink-black sky, thereby beginning the first leg of its annual migration to its winter quarters in western Mexico. In March, as snow piles up inside Yellowstone, herds of bison will leave the park in search of accessible forage.
Yellowstone National Park, like virtually all of our public lands, is filled with migratory animals, including birds, mammals, fish, and insects, a diverse array of creatures employing a diverse array of navigational tricks to reach destinations across the West and across the hemisphere. These animals are, for the most part, driven by opportunism. They take advantage of abundant food and other resources that are present in Yellowstone for only a portion of the year. For the western tanager, the park’s coniferous forests offer a smorgasbord of insects during the spring and summer, more than enough to raise a family. But once the cold weather sets in and the insects disappear, tanagers and other birds must find somewhere else to spend the winter. Similarly, Yellowstone’s lush meadows and grasslands can sustain thousands of bison and elk during the warm months, but heavy snow may eventually render that food inaccessible, forcing them to move to lower elevations inside and outside the park.
In addition to opportunism, one other characteristic unites Yellowstone’s diverse migrants: vulnerability. Logging and farming are destroying the montane forests of Mexico and Central America where western tanagers (and many other birds from the western U.S. and Canada) seek refuge during the winter. The fragile riparian woodlands that serve as crucial rest and refueling stops for them as they pass through the deserts of the Southwest are being degraded by overgrazing and development, while obstacles and dangers of all sorts—from skyscrapers to feral cats—have made the entire route more dangerous.
For bison, the primary enemies are a tiny bacterium and a lot of intolerance. The bacterium Brucella abortus was brought into the U.S. via imported cattle from Europe, and it spread to Yellowstone’s bison a century ago. Brucellosis (as the disease is called) has little effect on bison. Ranchers, however, detest it because it causes some of their cows to abort their fetuses and reduce their milk production. Fear of brucellosis has made Montana’s politicians and agriculture officials determined to keep Yellowstone’s bison away from Montana’s cattle. Unfortunately, a small number of ranchers continue to graze livestock on public and private lands adjacent to the park. So when bison leave the park, as often happens during harsh winters, state and federal officials first try to chase them back, using helicopters, snowmobiles, off-road vehicles, and people on horseback. Those that refuse to return are killed.
Nor are bison the only big mammals in trouble in Yellowstone. Biologist Joel Berger has estimated that over half of the elk migratory routes and three-quarters of the pronghorn routes in the Greater Yellowstone Ecosystem have been destroyed by residential development, oil and gas exploration, and the building of fences and other barriers.
Across the country and across the world, migratory animals are declining as their journeys become increasingly treacherous. This loss is not only aesthetic but ecological. Birds, for example, help to keep populations of defoliating insects in check, thereby reducing damage to our forests and croplands. Bison increase the productivity of grasslands by consuming the older, rank forage and by redistributing nutrients via their dung, all of which benefits other plants and animals, including pronghorn, prairie dogs, and grassland birds. Migratory salmon sustain grizzly bears, bald eagles, and other animals. The list goes on.
“No man is an island, entire of itself,” proclaimed the poet John Donne. The same holds true for our national parks, national forests, wildlife refuges, and other public lands. The well-being of many of the animals found on our public lands depends greatly on what happens on adjacent lands or even in distant countries. Saving these animals will require greater coordination among individuals, agencies, and nations, combined with a commitment to protecting them while they are still common. Migration is fundamentally a phenomenon of abundance. If we wait until these species are close to extinction, we will have lost both the glory and the ecological value of migration.
Photos: Western tanagers head south after a summer of feasting on insects in the Northern Rockies, some flying as far south as Mexico. Courtesy USFWS. Evidence suggests that over half of the elk migratory routes and three-quarters of the pronghorn routes in the Greater Yellowstone Ecosystem have been destroyed by residential development, oil and gas exploration, and the building of fences and other barriers. Courtesy BLM.
by John Racanelli, Chief Executive Officer
A Blue View is a weekly perspective on the life aquatic, hosted by National Aquarium CEO John Racanelli.
From the smallest plants and animals invisible to the human eye to entire ecosystems, every living thing depends on and is intricately linked by water.
Tune in to 88.1 WYPR every Tuesday at 5:45 p.m. as John brings to the surface important issues and fascinating discoveries making waves in the world today.
March 20, 2013: The Streams of Maryland
Click here to listen to John discuss the important role freshwater plays in the survival of all living things!
Held annually on March 22, the United Nation’s World Water Day brings attention to the importance of freshwater and advocates for the sustainable management of freshwater resources. Globally, freshwater accessibility is critical for the survival of all living things, yet it is a significantly threatened resource. In Maryland, our own freshwater streams and rivers need our help as they run to the largest estuary in the United States, the Chesapeake Bay.
Even if you don’t live on the water, the health of the Chesapeake Bay watershed, which encompasses more than 64,000 square miles to six states and the District of Columbia, affects each of us every day. More than 100,000 streams, creeks, and rivers weave through the Chesapeake’s vast watershed. In fact, according to the Maryland Department of Natural Resources, we all live within 15 minutes of a stream, making freshwater health not just a Maryland issue, but a backyard issue as well!
Healthy streams are organically balanced, with enough oxygen to support life. Decaying plants and animal waste provide a balanced amount of nutrients, and the water is not too acid or too alkaline. In these healthy streams, runoff is kept to a minimum, and chemicals from farms, factories, and residential areas do not make their way into the stream. Countless species rely on healthy freshwater ecosystems to thrive. Fish, snakes, turtles, frogs, invertebrates…DNR states that Maryland is home to more than 100 species of fish, 20 species of salamander, and 10 species of turtle, just to name a few stream-dwellers.
The diamondback terrapin is just one of the many species of reptiles that rely on Maryland waterways!
In a recent assessment by the Environmental Protection Agency (EPA), just 45 percent of sampled streams in the Chesapeake Bay watershed were rated fair, good, or excellent. As outlined in the EPA’s Strategy for Protecting and Restoring the Chesapeake Bay Watershed, the goal is to improve the health of the watershed so that 70 percent of sampled streams measure fair or better by 2025.
To help increase our understanding of stream health, DNR coordinates a team of volunteers who collect important stream quality data across the state. This program, called Stream Waders, is the volunteer component of the Maryland Biological Stream Survey. The use of these volunteers allows more streams to be sampled, giving a big-picture view of Maryland’s waterways. Volunteers participate in a one-day training session, then spend a couple days in March or April collecting aquatic invertebrate samples from stream beds.
The study of aquatic invertebrates, such as mayflies, caddisflies, and dragonflies, is instrumental in the analysis of streams. Because invertebrates vary in their sensitivity to pollutants, a healthy stream has both sensitive and tolerant invertebrate species while an unhealthy one would have only pollution-tolerant species. Ultimately, the Stream Waders data is used in DNR reports and is available for review on their website.
In our daily lives, each of us can take steps to help keep our community streams healthy. Take a walk along a nearby stream and properly dispose of trash you find along its banks. Limit pesticide use in your yard so that it doesn't make its way into freshwater supplies. Many local organizations host stream cleanups or wetland restoration events, so volunteer your time. Even just one day a year can make a real difference to a stream in your community.
Take action to keep our streams healthy today by joining our Conservation team at one of our upcoming cleanups!
Print Email Orbital Set to Launch NASA's LADEE Lunar Orbiter Aboard Minotaur V RocketInaugural Launch of Minotaur V to Deploy 45th Spacecraft by Orbital's Minotaur Product Line
Launch to Occur from NASA's Wallops Flight Facility in Virginia on Friday click to enlarge
Orbital Sciences Corporation (NYSE: ORB), one of the world’s leading space technology companies, announced today that it is in final preparations for the launch of NASA’s Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft aboard its new Minotaur V rocket. The vehicle is scheduled to be launched from NASA’s Wallops Flight Facility in eastern Virginia on Friday, September 6, 2013 at approximately 11:27 p.m. EDT. LADEE will mark the first lunar launch from the NASA Wallops facility and Orbital’s first Minotaur rocket launch of a payload that will travel beyond low-Earth orbit.
The Minotaur V is a five-stage space launch vehicle designed, built and operated by Orbital for the U.S. Air Force. It uses three decommissioned Peacekeeper government-supplied booster stages that Orbital combines with commercial motors for the upper two stages to produce a low-cost rocket for launching smaller spacecraft into low-Earth orbit and higher-energy trajectories, such as the trans-lunar flight of the LADEE mission.
“The Minotaur V design builds on Orbital’s proven systems engineering, production, test and flight operations supporting the Minotaur family of rockets, creating another cost-effective launch alternative for U.S. government space missions,” said Mr. Lou Amorosi, Senior Vice President of Orbital’s Small Space Launch Vehicle business. “We look forward to the successful launch of the LADEE orbiter and the opportunity to continue supporting NASA in its exploration and science missions.”
Under the Orbital/Suborbital Program (OSP) contract, which is managed by the U.S. Air Force Space and Missile Systems Center (SMC), Space Development and Test Directorate (SMC/SD) Launch Systems Division (SMC/SDL) located at Kirtland Air Force Base, New Mexico, Orbital designs, integrates, tests and provides launch services to orbit with the Minotaur I, IV, V and VI rockets, as well as other suborbital capabilities with the Minotaur II and III configurations. The company has launched a total of 23 Minotaur rockets with a 100% success record dating back to January 2000.
Employing a combination of U.S. government-supplied rocket motors and Orbital’s proven commercial launch technologies, the Minotaur family of launchers provides reliable and low-cost access to space for government-sponsored payloads. The rockets are specifically designed to be capable of launching from all major U.S. spaceports, including government and commercial launch sites in Alaska, California, Virginia and Florida. Orbital’s use of standardized avionics and subsystems, mature processes and experienced personnel make Minotaur rockets both reliable and cost-effective for U.S. government customers.
In addition to the Minotaur V rocket, the product line includes:
Minotaur I - The initial member of the Minotaur family, the Minotaur I is a four-stage space launch configuration that can place up to 1,300 lbs. into low-Earth orbit. It was originally launched in January 2000 and has conducted a total of 10 successful launches to date.
Minotaur II - A three-stage suborbital rocket, the Minotaur II is used as a target vehicle for testing U.S. missile defense systems and related missions. This configuration has performed eight successful launches to date.
Minotaur III - A three-stage suborbital rocket, Minotaur III can deliver suborbital technology demonstration payloads of up to 6,500 lbs. or serve as a target vehicle for testing U.S. missile defense systems and similar missions.
Minotaur IV - A heavier-lift four-stage space launch vehicle using retired Peacekeeper rocket motors, the Minotaur IV is capable of launching satellites weighing up to 3,800 lbs. into low-Earth orbit. Five successful launches have been conducted with this configuration.
Minotaur VI - An evolutionary version of the flight-proven Minotaur IV, the Minotaur VI provides a highly-capable and cost-effective launcher for U.S. Government-sponsored spacecraft of up to 7,000 lbs. into low-Earth orbit. The combination of four government-furnished solid rocket stages, a commercial solid rocket upper stage, and Orbital’s flight-proven systems and processes provide unmatched value and performance.
Source: Orbital Sciences Corp. Published on ASDNews: Sep 6, 2013
Deep Impact spacecraft eyes Comet ISONThe Comet ISON imaging campaign is expected to yield infrared data and light curves, which are used in defining the comet’s rotation rate, in addition to visible-light images.
By Jet Propulsion Laboratory, Pasadena, California, NASA Headquarters, Washington, D.C. | Published: Thursday, February 7, 2013
RELATED TOPICS: SOLAR SYSTEM | COMETS | COMET ISONThis image of Comet ISON (C/2012 S1) was taken by the Medium-Resolution Imager of NASA's Deep Impact spacecraft.NASA’s Deep Impact spacecraft has acquired its first images of Comet ISON (C/2012 S1). The spacecraft’s Medium-Resolution Imager took the images over a 36-hour period January 17–18, 2013, from a distance of 493 million miles (793 million kilometers). Many scientists anticipate a bright future for Comet ISON; the spaceborne conglomeration of dust and ice may put on quite a show as it passes through the inner solar system this fall.“This is the fourth comet on which we have performed science observations and the farthest point from Earth from which we’ve tried to transmit data on a comet,” said Tim Larson of NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. “The distance limits our bandwidth, so it’s a little like communicating through a modem after being used to DSL. But we’re going to coordinate our science collection and playback so we maximize our return on this potentially spectacular comet.”Deep Impact has executed close flybys of two comets — Tempel 1 and Hartley 2 — and performed scientific observations on two more — Comet Garradd and now ISON. The Comet ISON imaging campaign is expected to yield infrared data and light curves, which are used in defining the comet’s rotation rate, in addition to visible-light images. A movie of Comet ISON was generated from initial data acquired during this campaign. Preliminary results indicate that, although the comet is still in the outer solar system, more than 474 million miles (763 million km) from the Sun, it is already active. As of January 18, the tail extending from ISON’s nucleus was already more than 40,000 miles (64,400km) long.Long-period comets like ISON are thought to arrive from the solar system’s Oort Cloud, a giant spherical cloud of icy bodies surrounding our solar system so far away that its outer edge is about a third of the way to the nearest star other than our Sun. Every once in a while, one of these loose conglomerations of ice, rock, dust, and organic compounds is disturbed out of its established orbit in the Oort Cloud by a passing star or the combined gravitational effects of the stars in the Milky Way Galaxy. With these gravitational nudges, so begins a comet’s eons-long, arching plunge toward the inner solar system.Two Russian astronomers using the International Scientific Optical Network’s 16-inch (40 centimeters) telescope near Kislovodsk discovered Comet ISON on September 21, 2012. NASA’s Near-Earth Object Program Office, based at JPL, has plotted its orbit and determined that the comet is more than likely making its first-ever sweep through the inner solar system. Having not come this way before means the comet’s pristine surface has a higher probability of being laden with volatile material just waiting for some of the Sun’s energy to heat it up and help it escape. With the exodus of these clean ices could come a boatload of dust, held in check since the beginnings of our solar system. This released gas and dust is what is seen on Earth as comprising a comet’s atmosphere — coma — and tail.Comet ISON will not be a threat to Earth, getting no closer to Earth than about 40 million miles (64 million km) December 26, 2013. But stargazers will have an opportunity to view the comet’s head and tail before and after its closest approach to the Sun if the comet doesn’t fade early or break up before reaching our star.
RELATED ARTICLESAmateur and professional astronomers team up to reveal a spiral galaxy with a secretEnigmatic "ribbon" of energy discovered by NASA satellite explainedEarth-like planets are right next doorMassive stellar winds are made of tiny piecesDawn maps Ceres craters where ice can accumulateHubble captures vivid aurorae in Jupiter’s atmosphereHow can astronomers be sure that Pluto is larger than Eris? Could we get a different number for Eris' size once it's visited by a spacecraft?Are eight planets enough?Why haven't we sent a microphone to Mars, or any other planet for that matter? | 科技 |
Pan-STARRS telescope spots new distant comet
A preliminary orbit shows that the comet will come within about 30 million miles (50 million kilometers) of the Sun in early 2013.
By University of Hawaii at Manoa's Institute for Astronomy, Honolulu | Published: Monday, June 20, 2011
Astronomers at the University of Hawaii at Manoa have discovered a new comet that they expect will be visible to the naked eye in early 2013.
Originally found by the Pan-STARRS 1 telescope on Haleakala, Maui, on the night of June 5/6, UH astronomer Richard Wainscoat and graduate student Marco Micheli confirmed it as a comet the following night using the Canada-France-Hawaii Telescope on Mauna Kea.
A preliminary orbit computed by the Minor Planet Center in Cambridge, Massachusetts, shows that the comet will come within about 30 million miles (50 million kilometers) of the Sun in early 2013, about the same distance as Mercury. The comet will pose no danger to Earth.
Wainscoat said, “The comet has an orbit that is close to parabolic, meaning that this may be the first time it will ever come close to the Sun, and that it may never return.”
The comet is now about 700 million miles (1.2 billion km) from the Sun, placing it beyond the orbit of Jupiter. It is currently too faint to be visible without a telescope with a sensitive electronic detector.
The comet is expected to be brightest in February or March 2013, when it makes its closest approach to the Sun. At that time, the comet is expected to be visible low in the western sky after sunset, but the bright twilight sky may make it difficult to view.
Over the next few months, astronomers will continue to study the comet, which will allow better predictions of how bright it will eventually get. Wainscoat and UH astronomer Henry Hsieh cautioned that predicting the brightness of comets is notoriously difficult, with numerous past comets failing to reach their expected brightness.
Making brightness predictions for new comets is difficult because astronomers do not know how much ice they contain. Because sublimation of ice (conversion from solid to gas) is the source of cometary activity and a major contributor to a comet’s overall eventual brightness, this means that more accurate brightness predictions will not be possible until the comet becomes more active as it approaches the Sun and astronomers get a better idea of how icy it is.
The comet is named C/2011 L4 (PANSTARRS). Comets are usually named after their discoverers, but in this case, because a large team, including observers, computer scientists, and astronomers, was involved, the comet is named after the telescope.
C/2011 L4 (PANSTARRS) most likely originated in the Oort Cloud, a cloud of comet-like objects located in the distant outer solar system. It was probably gravitationally disturbed by a distant passing star, sending it on a long journey toward the Sun.
Comets like C/2011 L4 (PANSTARRS) offer astronomers a rare opportunity to look at pristine material left over from the early formation of the solar system.
The comet was found while searching the sky for potentially hazardous asteroids — ones that may someday hit Earth. Software engineer Larry Denneau, with help from Wainscoat and astronomers Robert Jedicke, Mikael Granvik, and Tommy Grav, designed software that searches each image taken by the Pan-STARRS 1 telescope for moving objects. Denneau, Hsieh, and UH astronomer Jan Kleyna also wrote other software that searches the moving objects for comets’ telltale fuzzy appearance.
The comet was identified by this automated software.
Photo: The Pan-STARRS 1 telescope on Haleakala, Maui, found a new distant comet, designated C/2011 L4, on the night of June 5/6. Credit: Henry Hsieh, PS1SC
Breakthrough: German atomic clock may redefine the second February 11, 2016 By Sam Catherman A team of German physicists has made a huge breakthrough. According to a report from Tech Times, researchers have developed a new optical single-ion clock that has taken the title of the world’s most accurate clock.
How does the clock work? The timepiece measures the vibrational frequency of ytterbium ions trapped within a network of laser beams as they swing back and forth hundreds of trillions of times each second. Measuring the number of “ticks,” or vibrations of the ytterbium ions is so accurate, physicists predict that the single-ion clock won’t lose time for more than a billion years.
Atomic clock’s aren’t a new thing – the previous record holder for the world’s most accurate clock was made of caesium. Atoms excited by microwave radiation provide the motion by which the devices measure subdivisions of time. The official definition of a second was based on the calculations of caesium clocks. A second, according to some of the world’s previously leading atomic clocks, is defined as 9.192,631,770 cycles of the transition between two different ions of a caesium atom. While this is pretty close, scientists still thought they could do better.
Researchers at Germany’s Physikalisch-Technische Bundesanstalt (PTB) have built a new atomic clock that is nearly 100 times more accurate than the leading caesium clocks. “It is regarded as certain that a future redefinition of the SI (International System of Units) second will be based on an optical atomic clock,” the study said.
The difference between this new optical atomic clock and older caesium clocks is in the high excitation frequency of up to 1,000,000,000,000,000 Hz. The newly developed clock is much more stable and thus much more accurate than a caesium clock.
The theory of atomic clocks was developed in the 1980’s by Nobel Prize winner Hans Dehmelt. Many atomic clocks have been built since then, but the new clock from PTB is the first to achieve a level of accuracy only believed to exist on paper.
The study was published in the journal Physical Review Letters. A press release describing the details of the clock’s development can be found here.
Filed Under: Front Page, Science Leave a Reply Cancel reply Your email address will not be published. Required fields are marked *Comment Name * Email * Website Search:
Menopausal hot flushes linked to depression in women
New study: Has the mystery of Crohn’s disease been solved?
Comcast and the YMCA partner to increase digital literacy
Surprising unintended consequence for LED lighting
Copyright © 2016 Jones Kilmartin Group, LLC · Metro Pro Theme On Genesis Framework · WordPress | 科技 |
John Garnaut
CHINA plans to extend its reach into Antarctica - by building a new ice-breaker ship, purchasing a plane and helicopters and upgrading its base into a year-round facility - in line with its rapidly expanding global profile.The head of China's Antarctic program, Wei Wenliang, says the new Antarctic transport capability will enable Chinese scientists to live year-round at an expanded and upgraded ''Kunlun'' base, located at the centre of Antarctica on a 4000-metre high ice plateau.
Wei Wenliang, head of China's Antarctic program. Conditions there are exceptionally harsh, with the minimum temperature last month dropping to minus 78 degrees.In a rare interview with foreign media, Mr Wei, the Communist Party chief of the Chinese Arctic and Antarctic Administration, said the increase in exploration and scientific capability meant China was now ready to ''shoulder responsibility'' in administering the region.A new paper by Anne-Marie Brady, from the University of Canterbury in New Zealand, details how China's scholarly papers and state-controlled media discuss Antarctica in terms that are ''virtually taboo'' in the West.''Chinese language polar social science discussions are dominated by debates about resources and how China might gain its share,'' says the paper, China's Rise in Antarctica?, in the forthcoming Asian Survey journal.
However, Mr Wei said China's program does not include exploring for or exploiting the continent's untapped mineral wealth or making a territorial claim, although he ''understands the misunderstandings'' about China's intentions.He said the camp on the Dome A ice plateau is the world's best location for astronomy and studying climate history.Mr Wei also revealed his scientists have drilled hundreds of metres into what will be a 1000-metre-deep ice core sample - which will shed light on 1 million years of climate history and assist with projections of global warming - and they plan to drill further for rock samples below.''If possible we would also like to extract rocks under the ice,'' he said. ''After drilling the ice core we will use the same hole, place a different head on the drill, and go deeper into the rock.''China's Antarctic program plays strongly into patriotic propaganda at home, with journalists from state-run television, radio and newspapers accompanying most expeditions.''China's polar activities are as much about national prestige as they are about the science,'' said Dr Brady, who added that China's role in exploration and governance was ''very positive''.Mr Wei's agency is a division of China's State Oceanic Administration, which last week trumpeted the fact that one of its submersibles plunged 3750 metres to plant a Chinese flag on the seabed of the disputed South China Sea.The Oceanic Administration, in turn, reports to the Ministry of Land and Resources. Mr Wei said that bureaucratic division did not imply anything about seeking land or resources in Antarctica.Dr Tony Press, who led Australia's Antarctic program for a decade until 2008, said there was no evidence that China was interested in breaching prohibitions on exploring and exploiting minerals in Antarctica. In any case, he said exploiting mineral resources on the continent was not ''physically'' possible.Dr Press said China's focus on the ice core was partly ''symbolic'' - at close to the highest point in Antarctica and drilling deeper than any other nation - but it could also be scientifically ''significant''.''It has the potential to push down through more than 1 million years of climate history to a time when climate cycles were much shorter than today,'' said Dr Press, who now heads the Antarctic Climate and Ecosystems Co-operative Research Centre at the University of Tasmania.China's massive expansion in Antarctic exploration is in line with its rapidly expanding impact on global affairs.Mr Wei said China had budgeted to spend 780 million yuan ($125.8 million) in the two ''five-year plans'' ending next year, before a sharp increase in the next five-year plan to be approved at a Communist Party meeting in October.The new ice-breaker will have the capacity to carry 60 scientists and 8000 tonnes of equipment and break through ice as thick as 1.5 metres. Mr Wei said the ship will be delivered in 2013 or 2014 and will be closely followed by new aircraft, which he said could be used co-operatively with Australia.Chinese and Australian scientists have co-operated since the early 1980s, when an Australian team hosted the first Chinese scientist to visit Antarctica. Both the weather station and the telescope at Kunlun station were provided by Australia.''The Australians are good friends with both sides, we're the neutral party in all of this, we've helped out the Chinese and the Americans,'' said Dr Press. | 科技 |
Steps taken during manufacturing can significantly impact landfill emissions, advises Lynn Bergeson, regulatory editor, in this month's Compliance Advisor column.
By Lynn Bergeson, regulatory editor
Climate change is caused by many activities, including waste disposal. The U.S. Environmental Protection Agency (EPA) issued an important life-cycle assessment of greenhouse gases (GHG) and solid waste management. The document, “Solid Waste Management and Greenhouse Gases — A Life-Cycle Assessment of Emissions and Sinks,” is included in the third edition of EPA’s “Greenhouse Gas Emissions from Management of Selected Materials in Municipal Solid Waste.”
In examining how municipal solid-waste management and climate change are related, the document provides a useful assessment of selected waste materials’ GHG implications at each point in the material’s life cycle.
Solid waste and emissions
Municipal solid waste (MSW) has much to do with GHG emissions. The materials in MSW represent what’s left after many steps have been taken, including the extraction and processing of raw materials, the manufacture of products, the distribution of products to market, the use of these products by consumers, and the management of these materials as waste, as EPA notes in the report.
The waste management phase has particular implications for waste that is buried. Bacteria decomposes much of the organic component of solid waste and, in so doing, produces equal parts carbon dioxide and methane gas. Approximately 18% of the buried carbon remains in the landfill, and the balance is converted into landfill gas consisting of carbon dioxide and methane, according to EPA estimates.
To measure the GHG impacts of MSW, EPA selected 21 single-material waste materials, organized into categories of metal, glass, plastic, paper, wood, food discards, yard trimmings, clay bricks, concrete, fly ash, and tires. These materials represent more than 65% by weight of MSW, according to EPA. EPA then developed a life-cycle inventory for each of the selected materials. The life-cycle inventory is streamlined because it examines GHG emissions only and isn’t a full analysis of all emissions from municipal solid waste management options. A more detailed analysis of EPA’s methodology is described in the report.
Reducing GHG emissions
Based on EPA’s analysis, source reduction represents an opportunity to significantly reduce GHG emissions. For many materials, the reduction in energy-related carbon dioxide emissions from the raw material acquisition and manufacturing process, and the absence of emissions from waste management, combine to reduce GHG emissions more so than other options.
For most materials, EPA believes that recycling represents the second best opportunity to reduce GHG emissions. For these materials, recycling reduces energy-related carbon dioxide emissions in the manufacturing process and avoids emissions from waste management.
EPA also believes that composting is a management option for food discards and yard trimmings. The net GHG emissions from composting are lower than those from landfilling of food discards, and higher than those from landfilling of yard trimmings.
In general, given the uncertainty in EPA’s analysis, the emission factors on which EPA’s analysis relies for composting or combusting these materials are similar.
Finally, EPA’s analysis concludes that the net GHG emissions from combustion of mixed MSW are lower than landfilling mixed MSW. According to the report, “[c]ombustors and landfills manage a mixed waste stream; therefore, net emissions are determined more by technology factors (e.g., the efficiency of landfill gas collection systems and combustion energy conversion) than by material specificity. Material-specific emissions for landfills and combustors provide a basis for comparing these options with source reduction, recycling, and composting.”
Regulatory consideration ahead
The global focus on climate change isn’t about to abate any time soon. EPA’s analysis of the GHG implications of MSW is a useful reminder that virtually every aspect of our lives has GHG implications and that waste management is no exception. The analysis also is useful in that, as climate change issues become more acute, waste management practices could be the subject of enhanced regulatory consideration. The Supreme Court ruled in Massachusetts versus Environmental Protection Agency that EPA is authorized to regulate GHG under existing law, and such gases from landfills are fair game.
Lynn Bergeson, regulatory editor, is managing director of Bergeson & Campbell, P.C., a Washington, D.C.-based law firm that concentrates on chemical industry issues. Contact her at [email protected]. The views expressed herein are solely those of the author. This column is not intended to provide, nor should be construed as, legal advice.
A couple of years ago, President Obama was having dinner with Silicon Valley movers and shakers, including an ailing Steve Jobs. Although no recording of the dinner has been made available, legend has it that the President asked Jobs what it would take for Apple’s manufacturing jobs to come back to America.
“Those jobs aren’t coming back,” replied Jobs.
Whether that is true or not, during the campaign both President Obama and Governor Romney focused a lot of their energies on plans to bring that kind of manufacturing back to American shores. Now that the president has won a second term, what does that mean for American technology and manufacturing?
Mr. Obama’s plan was built on education and training, investing in clean energy, and giving tax incentives to companies that bring jobs back to America.
Jobs’ grim reply was based on the fact that American workers are no longer trained for the needs of high-tech manufacturing. Obama believes that making college more accessible, both at the university and community college level, is part of the solution for closing that skills gap.
A major focus during Obama’s first term was keeping tuition rates down, doubling funding for Pell Grants, and trying to get community colleges to work together with local employers to better train outgoing students for jobs that already exist locally. There’s every reason to believe that push will continue.
Another creative idea from the campaign trail was the creation of “manufacturing innovation institutes,” where businesses and research institutions work together to make sure that American research and development produces corresponding high-tech manufacturing jobs. How the federal government can spur that idea remains to be seen, but it could be a key step in reshaping the American manufacturing sector and making sure the next Apple keeps its assembly lines here at home.
During the campaign, Obama predictably hit Governor Romney on the Republican Party’s close ties with the oil industry. In his first term, the president advocated an “all of the above” energy policy. He’s willing to explore any source, including oil, which can help America reach a majority of energy independence by 2035. It is now expected that investment in alternative energy technologies will only continue in the next four years.
During his first term, President Obama pushed ahead with plans to double fuel economy standards for cars and light trucks by 2025, which is spurring the auto industry’s investments in hybrid and electric vehicles and has led to an American manufacturing boom of sorts in battery technology. Since these rules are now under no threat of repeal, cars and light trucks are expected to sip less and less gas in the next four years. Horsepower may be a different story.
One of the places where we heard repeats of 2008’s campaign pledges was in the area of corporate tax structure. The president promised to end tax breaks for companies that participate in outsourcing during his first term and promised to do the same in his second term (which you can assume to mean that the first effort didn’t go so well).
He also wants to provide tax incentives for companies to bring jobs back to the US. Considering Apple currently enjoys a 1.9 percent tax rate on its foreign earnings, I’m not sure how much of an incentive the US government can provide. Perhaps they will mandate that all federal employees will have to buy an iPad Mini. At least all of our forest rangers will have something to do while they’re stuck in those fire towers.
Any conversation about tax policy centers on Congress. In the coming days, it will be interesting to see if our Republican House makes any overtures of bipartisanship to the president now that they know they are stuck with him for another four years. If history is any indication, don’t hold your breath. There’s probably better chance of an impeachment investigation than bipartisan tax reform.
Whatever the case, many pundits believe that the president will be bolder in his second term, enacting whatever policies he can without congressional approval if Congress proves to be as intransigent as they have been in the past. A lot of the policies mentioned above came to pass through that same kind of executive maneuvering. Because of that, the next four years of America’s technology and education policy might be just as much decided in courtrooms as in the Oval Office. | 科技 |
We don’t usually get too in-depth with science coverage here at Digital Trends, but this breakthrough certainly deserves mention: NASA announced today that it has confirmed the discovery of the first known planet that could have liquid water on its surface, thus making it potentially habitable for alien life. The planet, dubbed Kepler-22b, is one of more than 1,000 new planet candidates discovered during the Kepler mission.
The reason Kepler-22b is particularly notable is that it orbits in the middle of the so-called “habitable zone” of its star, which NASA calls “sun-like,” but is slightly smaller and cooler than our sun. (Earth orbits in a similar location to our sun.) Two other planets have been found in the habitable zones of their stars, but they orbit on the outer edges of what could be classified as habitable.
“This is a major milestone on the road to finding Earth’s twin,” said Douglas Hudgins, Kepler program scientist. “Kepler’s results continue to demonstrate the importance of NASA’s science missions, which aim to answer some of the biggest questions about our place in the universe.”
Kepler-22b measures roughly 2.4 times the radius of Earth, and is located approximately 600 light-years away. Its year — the amount of time it takes a planet to orbit its sun — is nearly the same as Earth's, taking 290 Earth-days to make a full orbit. The planet, along with 48 other planets that may also lie in a habitable zone, was discovered using a number of ground-based telescopes, which monitor the Cygnus and Lyra constellations, which are home to more than 150,000 stars.
To confirm that a heavenly body is, in fact, a planet, Kepler scientists watch to see if a planet candidate crosses in front of a particular star, something known as "transit." In order for a planet candidate to be confirmed as a planet, it must transit a minimum of three times. It was through this scrutiny that Kepler-22b achieved full planet status.
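To get a feel for how small that transit signal is, the sketch below estimates the fraction of starlight blocked when a Kepler-22b-sized planet crosses its star. The 2.4-Earth-radius figure comes from this article; the Earth and Sun radii are standard reference values, and treating the star as exactly Sun-sized is our simplification, since NASA describes it only as slightly smaller and cooler than our sun.

    # Approximate transit depth for a planet 2.4 times Earth's radius crossing
    # a Sun-sized star. The 2.4 figure is from this article; the radii are
    # standard reference values, and the Sun-sized star is an assumption.
    earth_radius_km = 6_371.0
    sun_radius_km = 696_000.0

    planet_radius_km = 2.4 * earth_radius_km
    transit_depth = (planet_radius_km / sun_radius_km) ** 2  # fraction of light blocked

    print(f"Fractional dip in brightness: {transit_depth:.6f} (~{transit_depth * 100:.3f}%)")

That works out to a dip of only about 0.05 percent, which is why Kepler waits for at least three such dips before a candidate is promoted to a confirmed planet.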
Now that Kepler-22b has been confirmed, Kepler scientist will begin to examine it more closely to find out what it is made of (solid, liquid or gas), and whether or not it has an atmosphere that could actually support life as we know it.
[Images via NASA/Ames/JPL-Caltech] | 科技 |
The Office of Naval Research (ONR) announced today that they will begin testing an advanced Electromagnetic Railgun (EMR) within the next few weeks. The development and testing of this advanced EMR is the result of a $21 million contract awarded to BAE Systems by the Office of Naval Research roughly two years ago. For those that may not know, the ONR is the office within the United States Department of the Navy that facilitates all science and technology programs for the U.S Navy and Marine Corps through various institutions, such as universities and government laboratories.
While most munitions both heavy and small depend on chemical propellants (like gunpowder), the EM Railgun launcher (as you may have guessed from its name) utilizes magnetic energy instead. The EM Railgun propels a conductive projectile along metal rails using a magnetic field powered by electricity. The magnetic field produced by the high electric currents thrusts a sliding metal conductor between two rails to launch a projectile at velocities of 4,500 to 5,600 mph. By contrast, the average velocity of a chemical propelled weapon is limited to about 2,700 give or take.
So what does that mean? Well, this increased velocity should allow for the Navy to reach targets of up to 50 to 100 nautical miles away or, if your inner sea-dog is a little rusty, about 57 to 115 miles out. Navy planners hope to eventually increase that range even further to distances up to 220 nautical miles (253 miles).
According to ONR, this increase velocity and extended range will give sailors multi-mission capability, and allow them to conduct precise naval surface fire support. In addition, ONR states that the EM Railgun may provide effective ballistic missile defense.
BAE Systems EM Railgun was delivered to the Naval Surface Warfare Center (NSWC) Dahlgren on January 30, 2012 and features a 32-megajoule payload. To add some perspective, one megajoule of energy is equivalent to a one ton car traveling at 100 miles per hour. Close | 科技 |
A new report from the blogger MS Nerd claims that the next generation of Xbox is deep in the development stages, is operating under the code name “Loop,” and the blogger has issued details on the system’s hardware.
We won’t blame you if you want to classify this as a rumor and throw it in the corner with all the other possible next-gen console rumors, but evidence is beginning to mount that a new Xbox is coming soon, and so it makes sense that details would begin to leak. Shy of Microsoft following its employees at all times with snipers trained on them, a project this massive will be impossible to keep entirely quiet. That said, until there are more sources confirming the details, take this with a grain of salt.
According to the blog, the next Xbox is being developed under the codename “Loop.” That name will certainly not stick to the final product, but if that is indeed the development name, expect to hear it pop up often until the system is officially debuted. The device was originally said to be listed under the name XboxTV, but that seems to have either been changed, or never solid to begin with.
The hardware will supposedly feature:
“A modified Win9 core. It will use a Zune HD-like hardware platform—a “main” processor with multiple dedicated assistive cores for graphics, AI, physics, sound, networking, encryption and sensors. It will be custom designed by Microsoft and two partners based on the ARM architecture.”
If true, this is by far the most detailed single account we have had on the next Xbox, but it is in line with all the other rumors we have heard floating around. The current top rumor has the next Xbox debuting in 2013, but so far details on hardware specifics have been scattered.
The blogger also claims that the “Loop” will be positioned with cost in mind:
“It will be cheaper than the 360, further enabling Kinect adoption. And it will be far smaller than the 360. It will also demonstrate how Windows Phone could possible implement Win9’s dev platform on the lower end.”
As is the standard MO, Microsoft has had no comment about the newest rumors/possible leak, but we certainly haven't heard the last of this.
New Spatial Information Act for Australia
By David Hocking
Geospatially speaking, we’ve created a bit of a mess in this country [Australia]. Over time, we’ve developed a mish-mash of conflicting policies and laws across all levels of government: we disagree about what kind of spatial information we need to do what tasks; we squabble over money (well, that’s probably nothing out of the ordinary); but arguably the worst sin of all, we don’t share our information.
Most of the time we muddle through the exasperating chaos but the process is, at best, inefficient and the results, unreliable. Wouldn’t it be nice if we could put aside our childish quarrels and petty differences of opinion and commit funds to creating and implementing a strong geospatial policy? In other words, stop bickering and get on with it?
The New South Wales (NSW), Australia government is now, in fact, attempting to do just that. Earlier this year, it decided to review the state’s Planning Act and it put out a White Paper, the first part of a long process to extract ideas and comments from those who have something to say about planning.
The Spatial Industries Business Association (SIBA) responded to this White Paper with a range of arguments around the need for a spatial data infrastructure (one of our pet issues) so that an appropriate platform can be established for transparent and factual planning.
We also showed how this would link to other policy challenges, suggesting that such an infrastructure would allow other diverse government service domains to be brought into the same spatial platform. It’s clear to us that interlinked spatial information would facilitate integrating and harmonizing data related to roads, bridges, utilities, rail, ports, airports, housing, industry, cities and the environment, among many other planning inputs.
In fact, I still have trouble understanding why all of these elements have not been linked together before now, for logical, cost effective and transparent planning. I can’t help but wonder why, as a country, we persist in making life difficult for ourselves.
But what really threw our shortcomings hard in our national face recently was our latest batch of natural disasters. Australia suffers its fair share of these – more in some years than in others – and they demonstrate all too vividly how poor our geospatial information is and how fractured our cross-jurisdictional policies are, as well. When these disasters happen we all lament our shortcomings, but it can be difficult to translate those lamentations into actions.
Politicians, of course, want quick wins. They latch on to the prevailing discontent in the face of these disasters and look for happy, fast answers. For example, the recent floods in Queensland prompted an agreement with the insurance sector on a common definition of flood. Not a bad thing in itself, especially for the families and businesses financially compromised by the floods – but it didn’t solve the real problem. The point is, insurance companies use existing data – assuming they can get access to them in the first place from the myriad government organizations that hold geospatial data in silos. They don’t necessarily understand the accuracy and fit-for-purpose issues, so they rely wholly on the data to which they can get access. If they have any doubts about accuracy, they cover their risk by lifting the price of insurance – a reasonable business response.
But what SIBA did during the flood crisis was to look at the problem through a spatial prism. We went to the core of the issue – planning. Flood damage isn’t caused by bad definitions of flood, nor is it caused by the cost of insurance. It’s caused by water, obviously, and by historical decisions that – perhaps sensibly at the time – put cities and towns on rivers. Over the years, more bad planning decisions have made the problem worse. SIBA responded to the various inquiries that followed the Queensland floods but we didn’t succeed in convincing government to invest in any measures to improve the spatial data.
But back to NSW and its breakthrough.
The responses to the White Paper generated a Green Paper (the next stage in the legislative process) and this Green Paper recommended that the government create a Spatial Information Act, in support of the Planning Act, that will “facilitate a whole-of-government approach to the application of information technology to spatial data (and not confined to planning information).”
This is a huge stride forward and the implications – for the rest of the country and for the spatial industry – are equally huge.
The Green Paper went on to say that there is no coherent legislative framework for sharing government-held spatial information between agencies, or from agencies to industry and the community. It also said: “reform of the planning system now provides a catalyst for enactment of legislation – the Spatial Information Act – to resolve these problems,” and “one of the most important features of access to spatial data dealt with by this legislation will be the creation of a geoportal where key government spatial data sets can be accessed by other government agencies and members of the public.”
Significantly, it also recognized how important it is that the minister with responsibility for land and property should also have charge of the Spatial Information Act.
Key geospatial recommendations:
Spatial Information Act
A single agency to coordinate spatial data
Comprehensive datasets to include land-use, cadastre, planning, features, social and government services, economic, valuation, building, licensing and registration data, infrastructure - water, gas, electricity, telecommunications and transport - among many others (in fact, it looks as though they have included all primary datasets across all levels of the economy)
Facilitate a whole-of-government approach as the foundational basis for all spatial information held across all government levels
Permit external private sector services to integrate data about telecommunications networks, gas pipelines and the like into a common database
Data custodian arrangements, including determining if there should be a central register of separate registers for each council (to be undertaken by the Coordinating Committee for spatial information)
Creating a geoportal where key government spatial datasets can be accessed by other government agencies and members of the public
A “duty to cooperate” to be included in the Spatial Information Act to ensure that data held or created by councils, state-owned corporations and agencies of the state are consistent, particularly with Strategic Plans
Basic access to the geoportal to be free (that said, there are some disturbing qualifications around a statement that says “information may be provided in a format that prevents re-use for commercial purposes,” which SIBA will address in the next phase)
Establishment of the appropriate custodian for each dataset
Ensuring government-held spatial datasets can be searched and combined with other datasets, so that the minister will have the capacity to establish metadata requirements for spatial datasets and spatial data services and to prescribe requirements for interoperability and harmonization
The Spatial Information Act will establish a coordinating committee to submit recommendations to the Minister for Finance and Services on initiatives to promote infrastructure for spatial information in New South Wales and to assist the minister with the implementation and use of these initiatives.
The Green Paper recommendations still need a bit of work but, by and large, SIBA believes that the Review Committee has taken a very big step in the right direction. Any amendments we propose will be minor.
In the decade since it was formed, SIBA has focused its attention on responding to more than 70 policy inquiries, ranging from water to planning and from transport to biosecurity. It’s gratifying and rewarding to see our labor bearing fruit.
We believe the way forward for the industry is to shift away from trying to define the industry with a simple word or two, toward ensuring that geospatial information and technologies underpin everything (well, almost everything).
Australian governments have gradually become more and more aware of how important spatial information and technologies are to policy as a whole. Sectors such as transport, defense and insurance are realizing it, too, and although there’s still a long way to go, the profile is finally rising and that’s encouraging.
The media’s growing attention and recognition helps, as well. Everyone is finally catching on. Details of all our recent submissions are on our website.
Published Wednesday, September 12th, 2012
Written by David Hocking
Energy and Infrastructure, Government
Rohm and Haas Co. of Philadelphia has announced that a team of its scientists, who developed the Aquaset technology, were recently among the winners of the 2006 American Chemical Society's (ACS) Heroes of Chemistry Award. The Rohm and Haas scientists, one of six groups, were honored as 'chemical innovators whose work has led to the welfare and progress of humanity.'
"Heroes save lives and change them for the better. This year's Heroes of Chemistry have improved our lives through their inventions," said E. Ann Nalley, ACS president, referring to this year's winners.
The Rohm and Haas scientists have demonstrated life-changing potential with the introduction of the Aquaset product line, a pioneering technology made without formaldehyde or formaldehyde-generating materials. The technology provides environmentally-friendly, state-of-the-art binder options to producers of fiberglass insulation for building and construction, appliances and other products for consumers and industry. The Aquaset product line satisfies an increasing customer need around 'green' technology that can positively impact safety and health conditions for the industry, consumers and the environment. Past honorees of the Heroes of Chemistry Award have discovered world-changing inventions that include renewable energy technologies, replacements for ozone-depleting compounds and medications to combat high blood pressure.
E&S, Vol. 9, Iss. 5, Art. 3
Assessing Photoinduced Toxicity of Polycyclic Aromatic Hydrocarbons in an Urbanized Estuary
M. Vo, South Carolina Department of Natural Resources; D.E. Porter, University of South Carolina; G.T. Chandler, University of South Carolina; H. Kelsey, University of South Carolina; S.P. Walker, University of South Carolina; B. E. Jones, University of South Carolina
Increases in contaminants associated with urban sprawl are a particular concern in the rapidly developing coastal areas of the southeastern United States. Polycyclic aromatic hydrocarbons (PAHs) are contaminants associated with vehicle emissions and runoff from impervious surfaces. Increased vehicular traffic and more impervious surfaces lead to an increased loading of PAHs into coastal estuarine systems. The phototoxic effect of PAH-contaminated sediments on a sediment-dwelling meiobenthic copepod, Amphiascus tenuiremis, was estimated in Murrells Inlet, a small, high-salinity estuary with moderate urbanization located in Georgetown and Horry Counties, South Carolina, USA. Field-determined solar ultraviolet radiation (UV) and UV extinction coefficients were incorporated into laboratory toxicity experiments, and a model was developed to predict areas of specific hazard to A. tenuiremis in the estuary. The model incorporated laboratory toxicity data, UV extinction coefficients, and historical sediment chemistry and bathymetric data within a spatial model of sedimentary areas of the estuary. The model predicted that approximately 8–16% of the total creek habitat suitable for meiobenthic copepods is at risk to photoinduced PAH toxicity. This area is in the northern, more developed part of Murrells Inlet.
Key words: Hazard modeling, photoinduced toxicity, polycyclic aromatic hydrocarbons, spatial modeling, urbanized estuary
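To make the hazard-model logic concrete, here is a minimal, hypothetical sketch of how UV extinction coefficients and bathymetry might be combined with sediment chemistry to flag at-risk areas. The extinction coefficient, surface UV level, depths, PAH concentrations and thresholds below are illustrative assumptions, not values from the study.

import math

def uv_at_depth(surface_uv, k_per_m, depth_m):
    # Beer-Lambert attenuation: UV decays exponentially with water depth.
    return surface_uv * math.exp(-k_per_m * depth_m)

def at_risk(surface_uv, k_per_m, depth_m, sediment_pah, uv_threshold, pah_threshold):
    # Flag a grid cell when enough UV reaches the bottom AND the sediment
    # PAH concentration exceeds the phototoxic-effects threshold.
    uv_bottom = uv_at_depth(surface_uv, k_per_m, depth_m)
    return uv_bottom >= uv_threshold and sediment_pah >= pah_threshold

# Illustrative grid cells: (depth in m, PAH concentration in ug/g dry sediment)
cells = [(0.5, 12.0), (1.2, 3.0), (2.5, 20.0), (0.8, 1.0)]
flags = [at_risk(surface_uv=100.0, k_per_m=1.5, depth_m=d, sediment_pah=p,
                 uv_threshold=10.0, pah_threshold=5.0) for d, p in cells]
print(flags)  # [True, False, False, False] with these made-up numbers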
Best of ECT News
Building Customer Loyalty on Shifting Sands
By Vivian Wagner • CRM Buyer • ECT News Network
This story was originally published on Dec. 10, 2012, and is brought to you today as part of our Best of ECT News series.
When Double Cola wanted to encourage its customers to keep coming back for more of its vintage fizzy drink, it started a rewards program. Using tried-and-true loyalty methods, the company offered customers the opportunity to turn over the cola's caps, earn points, and get discounts and merchandise.
"These guys wanted to come into the current generation," said Michael Levy, president of Online Rewards, the company that designed Double Cola's rewards program.
"It's been extremely successful. They developed a database, created a buzz, and fostered customer advocates," he told the E-Commerce Times.
Loyalty is all about creating a positive experience with a brand so that customers will want to form and maintain a relationship with it, noted Levy.
"We create loyalty experiences," he added. "The nature and form of meaningful loyalty has to speak to customer experience."
Loyalty programs have become increasingly popular, partly in response to the fact that contemporary consumers need incentives to continue purchasing a particular brand.
"We won't do much without a reward," explained Levy. "It's become part of our DNA. Our parents once did something for nothing, but our generation asks, 'What are you asking me to do, and what do I get for doing it?'"
To be effective, a loyalty program must give those customers something that reflects the spirit of a brand. A cash reward alone doesn't do the trick.
"You can give them money, but that doesn't really speak to the company," said Levy. "The company has to find a way to make the experience that it wants to create. The loyalty program must go beyond just cash back -- it must create experiences."
The more those experiences are woven into the daily lives of customers -- through apps and push notifications, for instance -- the more effective they will be.
"Location-based mobile apps enable businesses to reward loyalty in a very powerful way," Sam Ganga, executive vice president for commercial operations with DMI, told the E-Commerce Times.
"Imagine receiving a digital coupon on your mobile device for a free pastry at your favorite coffee shop, just as you're approaching," he suggested. "Nothing beats mobile apps for their ability to reach an incredibly broad audience of customers and establish remarkably intimate connections with them. It's a way to enable customers to put your business in their pocket -- or hold your products and services in the palm of their hand."
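As a purely illustrative aside, the proximity trigger Ganga describes typically reduces to a geofence check: is the customer within some radius of the store? The sketch below is not from DMI or any particular vendor; the coordinates, radius and haversine-based distance test are assumptions used only to show the idea.

import math

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points, in meters.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_send_coupon(user_pos, store_pos, radius_m=150.0):
    # Fire the offer only when the customer is inside the store's geofence.
    return distance_m(*user_pos, *store_pos) <= radius_m

store = (40.7411, -73.9897)  # hypothetical coffee shop
print(should_send_coupon((40.7413, -73.9899), store))  # True: roughly 30 m away
print(should_send_coupon((40.7500, -73.9800), store))  # False: roughly 1.3 km away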
Because of that intimacy, marketing through mobile devices is key to building a sense of loyalty.
"When the customers are enabled to earn and redeem rewards via a mobile device they always carry with them, the engagement increases dramatically," Roli Agrawal, CEO of ReZoop, told the E-Commerce Times.
ReZoop's loyalty model relies not on apps but on QR codes that can be read with any mobile device.
"Being universal means customers do not have to download an app for every business they visit," explained Agrawal.
"A ReZoop QR code is created for the registered business," he continued. "This QR code can be printed off from our website. ReZoop encourages businesses to offer their customers a welcome reward when they join the business loyalty program. Businesses achieve an increased customer participation in our digital loyalty program in less than one month."
Loyalty and the Bottom Line
Keeping existing customers is much less expensive than acquiring new ones, making it cost-effective for businesses to develop and maintain relationships with the customers they already have.
"Acquisition costs are significantly greater than retention costs," said Levy. "In order to get a new customer, one must communicate and let this new customer know you have something they might be interested in. You don't have that cost when you have an existing customer, because they know what it is you're selling, and they've bought it from you in the past. Acquisition costs can be three to 50 times the cost of retention costs."
Loyal customers come with another benefit: They market a business to their friends and relatives, and that kind of word-of-mouth marketing is priceless.
"We want customer advocates, because they become marketing beacons for the organization," said Levy.
Loyal customers, after all, are the ones who will post status updates about a brand on social media sites.
"Social media is the new name for old-world word-of-mouth advertising, and loyal customers are the only source of this free advertisement," said Agrawal.
The best loyalty programs are long-lasting, giving time for the relationship between a brand and its customers to develop and deepen.
"A loyalty program is ongoing," said Levy. "They don't necessarily last forever, but this is about developing an ongoing relationship."
Freelance writer Vivian Wagner has wide-ranging interests, from technology and business to music and motorcycles. She writes features regularly for ECT News Network, and her work has also appeared in American Profile, Bluegrass Unlimited, and many other publications. For more about her, visit her website.
July 18, 2014
ABB Wins $35M HVDC Upgrade Order In Canada
Upgrade to enhance grid reliability, improve efficiency and enhance power availability
ABB, the leading power and automation technology group, has won an order worth about $35M from Hydro-Québec to upgrade the 350 megawatt (MW) Madawaska high-voltage direct current (HVDC) transmission link that connects the grids of New Brunswick and Hydro Québec in southeast Canada. The order was booked in the second quarter of 2014.
The project scope includes installation of ABB’s MACH control and protection system and the upgrade of the valves and valve cooling system. The back-to-back converter station has been in operation for more than 25 years and the modernization is expected to significantly improve grid reliability and help reduce maintenance needs. The new station is scheduled to go into full operation in 2016.
“This upgrade will enhance power availability, reduce outages and improve grid reliability in the region” said Claudio Facchin, head of ABB's Power Systems division. “The project also reiterates our continued focus and commitment to growing our service business.”
ABB has built up significant experience in upgrade of HVDC links around the world as many such installations are coming of age. This is the 21st major HVDC modernization project and the 15th upgrade of control and protection systems awarded to ABB since 1990.
ABB’s MACH system is the world's most extensively deployed control solution for HVDC and Flexible Alternating Current Transmission Systems (FACTS) installations, with over 1,100 such systems in operation throughout the world.
ABB pioneered HVDC technology nearly 60 years ago and has been awarded around 90 HVDC projects representing a total installed capacity of more than 95,000 MW, which accounts for about half of the global installed base. ABB remains at the forefront of HVDC innovation and is uniquely positioned in the industry with in-house manufacturing capabilities for all key components of HVDC systems, including power semiconductors, converters and high voltage cables.
Hydro-Québec generates, transmits and distributes electricity, mainly using renewable energy sources, in particular hydroelectricity. It is Canada’s leading electric utility and among the biggest in North America.
About ABB
ABB is a leader in power and automation technologies that enable utility and industry customers to improve performance while lowering environmental impact. The ABB Group of companies operates in around 100 countries and employs about 150,000 people. For more information, visit www.abb.com.
SOURCE: ABB
Seal populations are finding it increasingly difficult to survive, especially along the Alaskan coast where their numbers have plummeted in the last three decades due to a combination of climate change, species invasion and human technology.
Overshoot: The Human Trajectory
MP3 audio of May 6, 2006 lecture by Overshoot author, William Catton
By EV World
William Catton wrote his landmark book, Overshoot: The Ecological Basis of Revolutionary Change, nearly 30 years ago. The premise of his thesis is that any species -- including man -- can be too successful in exploiting ecological niches and their accompanying resources. Such is the case with humanity's dependence on the finite resource called oil, which is why Professor Catton was asked to address the Sustainable Energy Forum on Peak Oil and the Environment in Washington, D.C., this past May.
"My intention today, using an evolutionary time perspective, is to emphasize that the changes coming to our future lives will be no passing inconvenience. Overwhelming dependence on an exhaustible resource has roots in a trend established long prior to its crescendo in the last half century."
And for the next 30 minutes, Catton dons his professorial mantle and proceeds to walk through a lecture he has surely given countless times. Catton points out that he carefully and deliberately avoids the term "crisis" when referring to the problems facing modern man; preferring instead the term "predicament" because what looms ahead can't be dismissed as a temporary storm that can be ridden through. "The consequences of our uses of hydrocarbon fuels will never be adequately understood if viewed apart from a context provided by principles of ecology," Catton explained. "It's become essential to recognize that all creatures, human or otherwise, impose a load upon the environments that surround them, the ability of that environment to supply what they need, and to absorb and transform what they excrete or discard. "What is meant by an environment's carrying capacity for a given kind of creature living in a given way of life, is the maximum persistently feasible load. It's a load just short of what would begin damaging that environment's ability to support life of that kind."
The critically important qualifier in that definition, Catton insists, is the phrase "living in a given manner."
The thrust of his argument is that once a culture reaches the carrying capacity of it resources, it can no longer increase in numbers or raise its collective standard of living, especially both.
"Most people still resist seeing the relevance of the carrying capacity concept for the human condition. Without this conceptual aid to vision," he continued, "they fail to see the serious effects of overuse of the environment or of a resource." In the case of the Peak Oil and the Environment conference, it's oil, he emphasized.
So, it is with some irony that he carefully builds the case that modern, energy-intensive man has assumed the energy-consumptive qualities roughly equivalent to an Ultrasaur -- once thought to be the world's largest dinosaur -- at a population density of 64 per square mile. He emphasizes that the more there are of us and the larger our energy/resource consumption footprint, the harder it is for the environment to provide the three basic categories of services upon which all life is dependent: an environment from which we obtain energy to live, in which to carry on life's activities, and into which to discard our metabolic products.
"The more colossal we are and the more of us there are… the less likely we are to keep these three vital functions from interfering with each other," he stated, cautioning that it is as important to consider the consequences of our collective impact on the "into" as it is the "from" when discussing man's prolific use of hydrocarbon fuels.
Catton has lost none of his didactic charm and persuasiveness, so if you're looking for a thought-stimulating lecture, then be sure to listen to his presentation in its entirety. You can do so by using either the Windows or Quicktime MP3 players in the right-hand column or by downloading the file to your computer hard drive for playback on your favorite MP3 device.
EV World extends its thanks to the organizers of the Sustainable Energy Forum for granting us permission to record the event.
Published: 02-Aug-2006
Nintendo to launch 3D-capable DS handheld
The device will allow 3D gaming without the need for special glasses
Martyn Williams (IDG News Service) on 23 March, 2010 20:07
The Nintendo DS. Its successor, provisionally called the 3DS, will be launched in Japan before the end of March 2011
Nintendo is planning to launch a new version of its handheld DS gaming device on which users can get the illusion of 3D without using special glasses, it said Tuesday.
The device, provisionally called the 3DS, will be launched in Japan before the end of March 2011, the company said. No further details were provided, but Nintendo said it would disclose more information at the upcoming E3 gaming event. E3 is scheduled to take place in Los Angeles from June 15 to 17.
3D has emerged as the latest big trend in consumer electronics and most major TV makers are planning to launch 3D TV sets this year. Those sets display two slightly different images, one for each eye, and glasses are required so that each eye sees the appropriate image and the brain is tricked into perceiving depth in the picture.
An alternate way of achieving a 3D effect is to place a filter directly in front of the display panel that consists of thousands of small lenses. The lenses focus each image to a fixed point in space and the viewer gets the 3D illusion as long as they are watching from that position. It has the advantage of not requiring glasses but restricts the viewing angle and typically means only a single person can see the 3D effect.
The new DS will launch as sales of the handheld are declining. In the last nine months of 2009 the company sold 23.4 million DS devices, a drop of 9 percent on the same period in 2008. Software sales during the same period were down by a quarter to 121.4 million units, according to Nintendo figures.
Nintendo most recently relaunched the DS in November in Japan, when it put out a version of the dual-screen handheld with larger screens. Each of the screens on the DSi LL is 4.2 inches in size, which makes them about double the area of the screens on the DS Lite and larger than the 3.25-inch screens on the DSi.
The timing of Tuesday's announcement is unusual because it comes just days before the March 28 launch of the larger DS in the U.S., where it is called the DSi XL. The handheld was launched earlier this month in Europe.
By disclosing its plans for a 3D version of the DS Nintendo risks potential customers putting off planned purchases of the DSi XL until it discloses more details about the upcoming 3D version.
Friday, June 6, 2014 - 11:13am
New Look At Apollo Rocks Finds Evidence Of Moon's Birth
Scott Neuman
An Apollo 12 astronaut makes footprints on the surface of the moon, Nov. 19, 1969. Rocks collected on the mission were among those recently re-examined by a team of German scientists.
NPR — A new analysis of rocks collected by Apollo astronauts on the moon more than 40 years ago bolsters the leading theory of our natural satellite's origin: that it formed from a collision between a nascent Earth and another object some 4.5 billion years ago.
The theory, first suggested by William Hartmann and Donald Davis in 1975, goes like this: The early solar system was a lot like a demolition derby, with big chunks of rocky material careening through space and frequently colliding with one another. One of those wayward objects, a planet-sized chunk of material scientists call Theia, slammed into young Earth, nearly splitting our world in two. The resulting wreckage was captured by Earth's gravity and coalesced into the moon.
The Hartmann-Davis theory has long been the best explanation of the available data, but computer models have predicted that such a collision should have left the moon with 70 percent to 90 percent of its material from Theia, and only 10 percent to 30 percent from Earth. But until now, no one has found any telltale sign of Theia in the moon rocks.
Now German scientists, publishing in the journal Science, say their analysis of rocks brought back by Apollo 11's Neil Armstrong and Buzz Aldrin, as well as the Apollo 12 and Apollo 16 missions, suggests that the moon may be about a 50-50 mix of material from Earth and Theia, whose name comes from the Greek goddess said to be the mother of the moon.
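To see how a small isotopic difference becomes an estimate of the Earth/Theia mix, consider a simple two-endmember mixing calculation. This is only an illustration; the oxygen-isotope values below are invented and are not the figures reported in the Science paper.

def theia_fraction(delta_moon, delta_earth, delta_theia):
    # Two-endmember mixing: treat the Moon as a linear blend of
    # Earth-like and Theia-like material and solve for the Theia share.
    return (delta_moon - delta_earth) / (delta_theia - delta_earth)

# Invented oxygen-isotope values (deviations in parts per million):
delta_earth, delta_theia, delta_moon = 0.0, 24.0, 12.0
f = theia_fraction(delta_moon, delta_earth, delta_theia)
print(f"Estimated Theia fraction: {f:.0%}")  # 50% with these invented numbers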
The Independent, reporting on the new study, says:
"[The] analysis of lunar rock samples from three Apollo missions ... has revealed distinct isotopic differences with terrestrial rock. Scientists believe these differences are remnants of the original [Theia].
" 'The differences are small and difficult to detect, but they are there. This means two things: firstly, we can now be reasonably sure that the giant collision took place. Secondly, it gives us an idea of the geochemistry of Theia,' said Daniel Herwartz of George August University in Gottingen, Germany, lead author of the study."
The oxygen isotopes found indicate that, unlike Earth, Theia belonged to a class of rare meteorites known as enstatite chondrites, which are thought to have formed close to the sun.
Herwartz told the BBC that his team's new research puts the last big doubt about the moon-impact theory to rest.
"It was getting to the stage where some people were suggesting that the collision had not taken place," he told BBC News.
"But we have now discovered small differences between the Earth and the Moon. This confirms the giant impact hypothesis."
The choice: kudos in space, or education on Earth
Tuesday 2 February 2010 00:00 BST
America's abandonment of a manned return-mission to the Moon by 2020 raises the old question of whether it is better to put people into space, with all the huge safety costs that incurs, or to rely on relatively inexpensive machines, such as the robotic rovers that have performed so well exploring the surface of Mars.
It was Barack Obama's judgement yesterday that the $81bn Constellation programme, which had the ultimate aim of a human landing on Mars by the middle of the century, could not be justified. But the arguments for manned space missions are as much to do with national kudos as scientific necessity. Human beings can of course make quicker and better decisions than any robot, but their presence in space also gives a country a sense of pride and international position – membership of an exclusive club of rich nations.
It is interesting that one of the first people to comment on President Obama's announcement was the former head of Nasa, Michael Griffin, who said that the United States is effectively abandoning the field of manned spaceflight to competitors such as Russia and China. "It means that essentially the US has decided that they're not going to be a significant player in human space flight for the foreseeable future," Mr Griffin said.
Fear of what other countries could do in space has always been a driver for manned exploration. The first and only manned mission to the Moon, the Apollo programme, was inspired by this kind of competitive spirit, which was not altogether friendly given that it was carried out at the height of a cold war between the two nuclear superpowers.
America won that race, but only after it had diverted huge amounts of resources to the effort. Many people have questioned what good it did in practical terms.
There is of course an argument for sending astronauts back to the Moon based on something bigger than mere national pride. It is about exploring the frontiers of knowledge with real people. The existence of men and women with the "right stuff" would act as inspiration for millions of children who would otherwise fail to be interested in science.
But it could also be argued that inspiring the next generation might be better done by spending the trillions of dollars of a manned Moon mission on better schools and infrastructure here on Earth – the real place where we actually live.
That's the calculation President Obama has had to make.
Posted on January 01, 2005
Symantec & Veritas: PB & J or oil & water?
Let's see: a $13 billion acquisition resulting in a company with 13,000 employees. This is starting to sound unlucky to me.
Although I should have figured out the Symantec-Veritas deal by now, I'm still scratching my head. We knew that Veritas was in play back in December, and the financial analysts were betting on EMC, Hitachi, IBM, Microsoft, or even Oracle as potential suitors. But when it comes to Veritas, I think EMC would rather beat 'em than buy 'em; Hitachi is primarily a hardware vendor that seems content to partner for software expertise; IBM has most of what Veritas has; Microsoft is more likely to acquire a second-tier backup software provider or go it alone; and Oracle hadn't even forked over the $10.3 billion for PeopleSoft back in mid-December.
My long-odds money was on a Hewlett-Packard acquisition of Veritas (although even Carly probably couldn't have talked the HP board into that one).
I suppose that from a shareholder perspective the deal may make some sense. But from a product and competition angle, I don't get it.
Sure, I understand the simplistic view: Storage + Security = Good. But even though the two technologies may be converging, they're still distinct and I don't think that end users really care whether they buy from one vendor or two (or more).
So much for the one-stop-shop argument in favor of the merger. Besides, melding storage and security isn't new. Computer Associates has been doing that for years, and just because they've integrated the two doesn't necessarily mean that they've fared any better in either market.
Symantec had to gear up to combat Microsoft on the security front, and Veritas was staring at the lights of the oncoming train from Hopkinton, but will the combined company fare any better in those encounters? Only time (and a lot of it) will tell, but I'm doubtful.
On the plus side, both companies get expanded channels. And there's virtually no product overlap. But the huge challenge is how/when the companies' products will be integrated. And their executives' often-repeated mantra about "synergies" between the two product lines is highly debatable.
Unless they're shareholders, I'd guess that most end users greeted this deal with a big yawn, despite the enormity of it and the surprise factor. The two companies' product lines will continue to be top shelf, and service and support won't suffer.
One thing's for sure: M&A mania in the storage market is back in full swing as we cruise into 2005.
Dave Simpson
Editor-in-chief
Why the Symantec-Veritas deal does make sense
When Symantec first spoke to me about its new business division -- Symantec Enterprise Administration (SEA) -- last spring, I was a little skeptical, or at least confused, about the company's direction. Their vision of an integrated management suite spanning security, systems management, and storage management, grand as it was, lacked substance. It did, however, jibe with industry trends toward more-unified data management.
But my gut told me there was a bigger story here, and last month the story unfolded. While the idea of a security company -- even one the size of Symantec -- moving into the storage biz is difficult for some to grasp, I'm among those who believe that the Symantec-Veritas deal makes a lot of sense -- and for many reasons. (I also applauded EMC's decision to buy Documentum for $1.7 billion.)
I'd argue that the merger is a testament to three larger, more-powerful trends at work in the industry today, as evidenced by the increasing number of storage/non-storage mergers and acquisitions over the past year or so: the ongoing convergence of traditional networking and storage networking; the criticality of information across the enterprise; and the demand for unified data platforms and utility computing.
That said, do I think that the Symantec-Veritas deal will result in a sudden surge in end-user interest in storage security products? No. In fact, I'm not certain what form "storage security" will ultimately take, nor do I think discussions about this merger should center on this point.
I'd argue that data/information availability, wherever the data resides -- not storage security -- is the issue at hand. International Data Corp. has pegged the total market opportunity for the combined company at $35 billion, growing to $56 billion by 2007. I think it's safe to say that the bulk of the opportunity won't come from sales of storage security products, in whatever form they take.
Regardless, with projected revenue of $5 billion next year, Symantec-Veritas will be a major force. The next question is, what's next? Will Symantec stop here? Maybe it will take EMC's lead and delve into compliance or even content management. After all, it's all information. Stranger things have happened.
Heidi Biggar
Senior Technical Editor
AT A GLANCE: Symantec-Veritas
Expected value: $13.5 billion in an all-stock deal
Symantec shareholders to own about 60%, and Veritas shareholders 40%, of the combined company, which will retain the Symantec name
Deal is expected to close in the next quarter
Combined company will be the world's fourth-largest software vendor, with combined revenues of more than $4 billion
Management expects 75% of revenues to come from enterprise customers, with 25% coming from sales to consumers
John W. Thompson to remain chairman and CEO, with Gary Bloom (formerly Veritas' CEO) as vice chairman and president
Atmospheric Water Generators Available Now, Announces Molecule New Water Technologies
Molecule New Water Technologies has announced that it is now able to provide Atmospheric Water Generators to consumers across the United States. These systems, which are environmentally friendly, are able to convert humidity into water and then store it for later use.
MARBLE FALLS, Texas, Oct. 30, 2014 /PRNewswire-iReach/ -- Atmospheric Water Generators are now available through Molecule New Water Technologies, the company has announced. The units the company provides are able to make water directly from the air, enabling consumers to have access to clean and pure water in a convenient and environmentally friendly way. The availability of Atmospheric Water Generators, along with many of the other products offered by Molecule New Water Technologies, provides solutions to issues relating to water shortages, contaminated water sources and more.
The company offers both small and large models of these systems, comparing the smaller model's function to that of a water fountain. This smaller model condenses humidity and turns it into water, storing it until it is needed for use. The larger model functions in a similar way, and it is considered ideal for whole home use or for industrial applications in which clean and pure water is needed.
"It gives us great pleasure to be able to provide Atmospheric Water Generators to individuals and businesses across the country," said Eliot Harris, the president and owner of Molecule New Water Technologies. "These generators are a vital tool for those who are looking for a clean and pure water system that is environmentally responsible."
Harris pointed to the numerous issues that affect the quality of drinking water that is available, including contamination of water sources due to fracking and industrial pollution. There are other issues at play in many places as well, as numerous locations across the United States have had to deal with droughts that lessen the availability of water. Through the use of Atmospheric Water Generators, it is possible to alleviate a number of the problems caused by a lack of available clean water.
These units feature a 12-stage filtration system, which ensures that the water stored in the unit is properly filtered for use across a variety of applications. The company ships these units throughout the United States, and they are also available at the company's retail store in Marble Falls, Texas.
Media Contact: Brandon Hopkins, AfterHim Media LLC, 559-871-1613, [email protected]
News distributed by PR Newswire iReach: https://ireach.prnewswire.com
SOURCE Molecule New Water Technologies
Related Links: http://www.moleculewatertech.com
High Speed Fail: West Virginia only has state officials to blame
Editorials Dec 4, 2013
It should have come as no surprise that federal officials are not going along with a proposal to spend $2.5 million for another high-speed Internet project in West Virginia. Mismanagement of other broadband initiatives left little reason for confidence in this one.
West Virginia already has burned through most of the $126.3 million “stimulus” grant the state received to expand access to high-speed Internet service. Money from the grant not spent by Dec. 31 has to be returned to Washington.
Finding about $2.5 million left in the account, state officials solicited proposals for one last project. They decided to go along with a plan by Citynet, based in Bridgeport, W.Va., to give the state direct connections to the national Internet “backbone.” Theoretically, that would allow higher Internet speeds for many Mountain State residents and businesses.
But federal officials have declined to approve the project, for many reasons. Among them was mismanagement of a program to buy hundreds of new computer network routers for public facilities in West Virginia. Millions of dollars were wasted when unnecessarily expensive, complex routers were purchased.
Also cited by the National Telecommunications and Information Administration in rejecting the new state plan were a lack of letters of support for the Citynet proposal, an $880,000 shortfall in the plan’s budget and questions about how the project could be completed by Dec. 31.
State officials still hope to answer the NTIA objections and spend the $2.5 million. But agency officials have every reason to be leery of the plan, in view of mismanagement that has plagued the Mountain State's broadband expansion program. If the money is lost, it will be a shame – but West Virginians will have no one but state officials to blame.
Effects of Climate Warming on Fish Thermal Habitat in Streams of the United States
John G. Eaton and Robert M. Scheller
Vol. 41, No. 5, Freshwater Ecosystems and Climate Change in North America (Jul., 1996), pp. 1109-1115
The effects of climate warming on the thermal habitat of 57 species of fish of the U.S. were estimated using results for a doubling of atmospheric carbon dioxide that were predicted by the Canadian Climate Center general circulation model. Baseline water temperature conditions were calculated from data collected at 1,700 U.S. Geological Survey stream monitoring stations across the U.S. Water temperatures after predicted climate change were obtained by multiplying air temperature changes by 0.9, a factor based on several field studies, and adding them to baseline water temperatures at stations in corresponding grid cells. Results indicated that habitat for cold and cool water fish would be reduced by ∼ 50%, and that this effect would be distributed throughout the existing range of these species. Habitat losses were greater among species with smaller initial distributions and in geographic regions with the greatest warming (e.g. the central Midwest). Results for warm water fish habitat were less certain because of the poor state of knowledge regarding their high and low temperature tolerance; however, the habitat of many species of this thermal guild likely will also be substantially reduced by climate warming, whereas the habitat of other species will be increased.
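The projection rule quoted in the abstract can be written out explicitly. The short sketch below simply restates it; the baseline temperature and air-temperature change are invented example values, not data from the study.

def projected_water_temp(baseline_water_c, air_temp_change_c, factor=0.9):
    # Future water temperature = baseline + 0.9 x predicted air-temperature change,
    # applied at each monitoring station using its grid cell's GCM output.
    return baseline_water_c + factor * air_temp_change_c

# Invented example: a 26.0 C baseline station in a grid cell where the model
# predicts a 4.5 C air-temperature rise under doubled CO2.
print(projected_water_temp(26.0, 4.5))  # 30.05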
BROWN-EARED BULBUL [Ixos amaurotis]
The Brown-eared Bulbul (Microscelis amaurotis) is a medium-sized bulbul which is found from the Russian Far East (including Sakhalin), northeastern China, the Korean Peninsula, and Japan, south to Taiwan and the Babuyan and Batanes island chains in the north of the Philippines, occasionally being found in Luzon. It is extremely common within the northern parts of its range, and is a familiar bird throughout Japan, where it is called hiyodori. However, in Taiwan it is rare and limited to Orchid Island 1.
This species was long placed in the genus Hypsipetes, at that time an indiscriminate assemblage of more or less related bulbul species. Later, its distinctness was recognized and it was variously placed in the genus Ixos or given a genus of its own, Microscelis. Analysis of nuclear and mitochondrial DNA sequences suggests that the latter is more appropriate (Moyle & Marks, 2006).
Historically, Brown-eared Bulbuls are migratory birds, moving to the southern parts of their range in winter, but they have taken advantage of changes in crops and farming practices in recent decades to overwinter in areas farther north than previously possible. As a result, they are considered agricultural pests in some areas of Japan. Most Brown-eared Bulbuls still move south in winter, often forming huge flocks during migration. In summer, Brown-eared Bulbuls feed mainly on insects, while in the fall and winter they take mostly fruits and seeds.
Reaching a length of about 28cm, Brown-eared Bulbuls are grayish-brown, with brown cheeks (the "brown ears" of the common name) and a long tail. While they prefer forested areas, they readily adapt to urban and rural environments, and their noisy squeaking calls are a familiar sound in most areas of Japan.
LifeSize Founder Craig Malloy Rejoins LifeSize as Chief Executive Officer
NEWARK, Calif. — Feb. 13, 2014 and LAUSANNE, Switzerland, Feb. 14, 2014 — Logitech (NASDAQ: LOGI) (SIX: LOGN) today announced that LifeSize founder Craig Malloy has rejoined the company as chief executive officer of LifeSize and senior vice president of Logitech. Malloy is a leader in the video conferencing industry who founded LifeSize in 2003, oversaw its acquisition by Logitech in 2009 and served as LifeSize’s CEO until 2012. Over the past year, Malloy has continued to work closely with the LifeSize and Logitech leadership teams as vice chairman of LifeSize’s supervisory board. “We welcome Craig’s experience and leadership abilities in the video conferencing industry to steer LifeSize at this transformative time and accelerate its ability to bring innovative video solutions to customers of all sizes,” said Bracken Darrell, Logitech president and chief executive officer. “Under his leadership, LifeSize is well-positioned to win in this changing market place.”
“LifeSize’s legacy as a pioneer and world leader in video collaboration is one of the crowning achievements of my career, and I am thrilled to be back to lead us in the right direction and build upon our proven success,” said Malloy. “The guiding principle on which LifeSize was founded was one of relentless innovation, and I aim to reinvigorate LifeSize with that inventive, audacious spirit once more.”
About LifeSize
LifeSize is a pioneer and world leader in high-definition video collaboration. Designed to make video conferencing truly universal, our full range of open standards-based systems offer enterprise-class, IT-friendly technologies that enable genuine human interaction over any distance. Founded in 2003 and acquired by Logitech in 2009, LifeSize, with its commitment to relentless innovation, continues to extend the highest-quality video conferencing capabilities to anyone, anywhere, where there is Internet access. For more information, visit http://www.lifesize.com.
This press release contains forward-looking statements within the meaning of the federal securities laws, including, without limitation, statements regarding: LifeSize's ability and timing to bring innovative video solutions to a wide range of customers; LifeSize's ability to win in the video conferencing market place. The forward-looking statements in this release involve risks and uncertainties that could cause Logitech's actual results and events to differ materially from those anticipated in these forward-looking statements, including, without limitation: the demand of our customers and our consumers for our products and our ability to accurately forecast it; if we fail to innovate and develop new products in a timely and cost-effective manner for our new and existing product categories; the effect of pricing, product, marketing and other initiatives by our competitors, and our reaction to them; if our products and marketing strategies fail to separate our products from competitors' products. A detailed discussion of these and other risks and uncertainties that could cause actual results and events to differ materially from such forward-looking statements is included in Logitech's periodic filings with the Securities and Exchange Commission, including our Quarterly Report on Form 10-Q for the fiscal quarter ended December 31, 2013 and our Amended Annual Report on Form 10-K/A for the fiscal year ended March 31, 2013, available at www.sec.gov, under the caption Risk Factors and elsewhere. Logitech does not undertake any obligation to update any forward-looking statements to reflect new information or events or circumstances occurring after the date of this press release.
http://www.mysanantonio.com/news/article/On-Aging-The-bigger-they-are-the-older-they-grow-3462503.php
On Aging: The bigger they are, the older they grow
By Steven Austad
Updated 10:55 am, Monday, April 9, 2012
Sometimes we recognize patterns in the world so intuitively that we don't appreciate their profundity. Throw a mouse off a 10-story building, for instance, and it will walk away unhurt (depending somewhat on whether it lands on concrete or dirt). Do the same for a dog, it will be killed. A horse will splat. (Now don't try this experiment at home. Take my word for it.)
We don't need to know the laws of physics to anticipate the results, but if we thought about this pattern for a few minutes it might tell us something about the laws of gravity, about the nature of air resistance and about how the world is a very different place depending on what size you are. You won't find horse-size animals scrambling around treetops, but you will find lots of mice-size ones.
Size also affects how fast things age. Again, the pattern is familiar. That hamster you had when you were a child, as well as its babies, their babies, their babies and several more generations, will have long since hopped away to hamster heaven by the time you became an adult. Your childhood dog, on the other hand, may have grown a bit slow and slightly decrepit but was still likely to be alive when you came home from college or boot camp.
Had you owned a pet elephant, on the other hand (again, don't try this at home), it would not have changed much between your childhood and the time you bought your first legal drink. Big species age more slowly and live longer than small species.
Does this mean that really big species, like whales, live even longer than we do? This is not an easy question to answer. Most of what we know about the longevity of other species comes from pets or zoos and rely on dependable birth records. Except for a few small whales such as SeaWorld's Shamus, whales live in the wild, and wild whales' birth certificates are not as accurate as we might like.
But biologists have other clever ways of estimating an animal's age. In the case of the bowhead whale — the second-largest whale on the planet — researchers estimated their age by analyzing certain proteins in the eyes of animals killed during Eskimo hunts.
The surprising results: Nearly 10 percent of the animals studied were estimated to be more than 150 years old; the oldest was estimated to have been killed at the extremely ripe age of 211 years.
To put this in perspective, only about one in 5,000 Americans today lives to be 100 years, and the current oldest person in the world is about 114 years old.
Other whale researchers were skeptical about these age estimates. But over the past few years, a variety of traditional stone and ivory harpoon tips have been recovered from the bodies of recently deceased whales. Comparing these implements to collections in the Smithsonian Institution, suggests they could have been as old as a few hundred years and no more recent than about 120 years. Suddenly, a 200-year-old whale did not seem so far-fetched.
But how does such extreme longevity fit with the bowhead whale's size? Quite well, it turns out. With a little mathematical tinkering, you can draw a straight line through a graph of animal size versus animal longevity, and most species hug that line quite closely. If you are a mammal weighing about 200,000 pounds like the bowhead whale, this line predicts you should live about 200 years. The pattern persists.
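For readers curious about the "mathematical tinkering," the idea is to plot the logarithm of body mass against the logarithm of maximum lifespan, fit a straight line, and read off the prediction for a 200,000-pound mammal. The sketch below uses round, invented mass and lifespan figures, so the fitted prediction depends entirely on those inputs.

import math

# Invented (mass in kg, maximum lifespan in years) pairs; not a real dataset.
species = [(0.025, 4), (0.1, 4), (30, 25), (500, 50), (5000, 80)]

xs = [math.log10(m) for m, _ in species]
ys = [math.log10(t) for _, t in species]

# Ordinary least-squares fit of log10(lifespan) = a + b * log10(mass).
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

bowhead_kg = 200000 * 0.4536  # about 90,700 kg
predicted = 10 ** (a + b * math.log10(bowhead_kg))
print(f"Predicted maximum lifespan: {predicted:.0f} years")  # roughly 185-190 with these inputs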
Wait a second — don't humans live a lot longer than expected for a species our size? That's another interesting question — and the topic of my next column.
Steven Austad is a professor and interim director for the Barshop Institute for Longevity & Aging Studies at the UT Health Science Center San Antonio. His column appears every other Sunday in S.A. Life.
[email protected]
Facebook Rolls Out Opt-In Friend Tracker
By Seth Fitzgerald / NewsFactor Network
Social media giant Facebook is rolling out its new “Nearby Friends” feature that will tell users how close they are to people on their friends list. This feature, which carries with it a variety of privacy concerns, will not be turned on by default. Unlike other additions to the social network that were turned on upon launch, Nearby Friends is entirely optional.
The feature has been in the works for a while and Facebook executives have been talking about location tracking features for many years. Typically, when Facebook adds a major feature to its site, it is doing so with advertising in mind. With Nearby Friends, the social network has already stated that it will eventually use the information gathered from tracking users to enhance its advertising options.
A Useful Feature
Many of Facebook's features are easy to set up, but that is not necessarily the case with Nearby Friends. The new service comes with a wide variety of options, most of which exist to combat privacy concerns, particularly those dealing with other users and not the site itself. Social networks are already used by malicious individuals who can rely on updates and location information to stalk other people. By adding a real-time location tracker to Facebook, the potential for criminal abuse is even higher.
Luckily, users who choose to enable the feature must be at least 18 years old, and after it is turned on, they have multiple privacy-centric options at their disposal. By default, Nearby Friends will only broadcast the general location of a user. Therefore, people will only know what city other people are in as well as how far away they are. Users have the ability to choose who can see their locations as well, by narrowing down their friends list to "close friends" or even a specific list of people. In situations where it is beneficial for one user to know another user's exact location, the option to enable detailed location sharing is available and can be adjusted to remain in effect for a certain time period.
Money Maker
One aspect of Nearby Friends has already provided some insight into how Facebook will use the service once it becomes mainstream and the roll out is completed. The component in question is the location log. Users who turn on Nearby Friends have no choice but to allow the network to create a list of a person’s previous locations. Even though they have some control over this component -- users can manually delete individual locations -- Facebook has admitted that the feature will eventually be used for advertising purposes. A company spokesperson told TechCrunch that “at this time [Nearby Friends is] not being used for advertising or marketing, but in the future it will be.”
We caught up with Jeffrey Chester, Executive Director of the Center for Digital Democracy, to get his view on the new tracking feature. He told us that Facebook will encourage people to enable the services so that advertisers can benefit from the extra information.
"Facebook is expanding its close tracking of ourselves and our friends in order to sell us to their advertisers," says Chester. "Facebook users should not allow their location to be sold to the highest bidder -- and accessible to people who want to spy on our activities."
Q & A for NHC - Colin McAdie
Meteorologist, Technical Support Branch
By Dennis Feltgen, NOAA NHC Public Affairs Officer
So many people in meteorology got the weather bug at a young age. What about you?
Yes. One of my earliest childhood recollections is being outside in my front yard in Baltimore watching the clouds moving very fast, and the wind blowing the trees. My Mom came to the door and told me to come inside, that there was a hurricane coming! I relented, but really wanted to stay outside to see what was happening. Well, that was Hurricane Hazel in 1954.
The Mid-Atlantic States do have quite a bit of weather.
There is a lot of variety. I also remember the occasional big late winter snowstorms when I was a kid, and I would listen to the radio to learn if my county was closing school because of the heavy snow. That was always fun. Not of course for the adults. I think I figured that anything capable of closing school for a day or two was worthy of some scientific investigation!
So you kept the weather interest growing up?
Yes, but I was always more fascinated with why something was occurring in weather, rather than just what was occurring. I ultimately discovered that meteorology was the answer to "why" and that eventually led to a Masters in Meteorology from Florida State University.
Subsequently, I went out to Boulder, Colorado, as part of PROFS (Program for Regional Observing and Forecasting Services) – it became FSL (Forecast Systems Laboratory) and is now known as GSD (Global Systems Division). I stayed with that for four years, working on a suite of severe weather algorithms. These became part of the processing on the system that we now call the 88-D. It was the next generation radar, NEXRAD, a Doppler radar that would replace the aging WSR-57 and -74 radars. Then an opening came up at the National Hurricane Center in the technical support branch, and I have been here ever since. My family has been up in central Florida for a long time, so that was an additional motivation.
Radar does not seem to get the attention that satellite pictures do.
It really depends on the situation. I think you have observed that during a landfall, radar undergoes a rapid increase in popularity. It really gives you a feeling for storm structure that is tough to beat. We not only get location, of course, but also the velocity data. The spatial mapping of the velocity field, especially in the core, is a great way to find the maximum velocity.
What's the biggest leap in radar technology that you've seen?
Two things. One is the display. Twenty years ago we might be lucky to get a single image in some cases. Connectivity was really painful. Today we're getting an almost continuous stream of data with a fast network connection, cursor read-out of high resolution data, it's great.
The other thing is the fairly recent availability of the actual digital data, known as the "level 2 data". This was a long time coming, but we finally have the technology. Having level 2 in real-time finally lets us do some data processing locally to get some answers specific to our problem, that is, tropical cyclones.
What's on the immediate horizon?
The next step is the dual polarization radar, which if things go according to schedule will just start to become available next year. It will be an add-on feature to the current 88-D, a retrofit. The basic idea is that the dual-pol gives you a measurement of what phase of water you are looking at, in other words, is it water, is it ice? This is being billed as an improvement in precipitation estimates primarily, which is fine, but I also have a suspicion that it might give us a handle on intensification. It's an intriguing possibility.
Anything after that?
After that, there is something called "Nexrad-in-space". There are some technological hurdles, but this would give us essentially a geostationary 88D, with potentially hourly radar snapshots covering the whole Atlantic basin. We could see systems coming off Africa, for example, or in other areas that are beyond the routine reach of the aircraft or other platforms. It would give us a much better idea of what is out there.
What is the best part of your job?
For me, believe it or not, it's getting a data set, setting up a problem or question that needs to be answered, and then doing some programming to get there. There's something fun about turning the crank and having an answer drop out – not really that easy, of course, but that's the idea.
You've been doing this for quite a while. What's next?
Well, that's one I don't really have an answer for. I'm just enjoying what I do now.
Send comments to: [email protected]
In Memoriam: Kevin P. (Rolland-) Thompson
Kevin P. (Rolland-) Thompson, Ph.D., an OSA Fellow and active OSA member having served as a conference speaker and committee member, passed away on 20 November 2015. Thompson was the Group Director, Research and Development/Optics at Synopsys, Inc. Thompson was known for leading breakthroughs in the understanding of the aberration fields of a new class of truly nonsymmetric optical systems using freeform optical surfaces.
Thompson received his Ph.D. in Optical Sciences from the University of Arizona, College of Optical Sciences in 1980, where he worked with Professor Roland V. Shack on optical aberrations for optical systems without symmetry. Thompson’s undergraduate work was done at the University of Minnesota, Institute of Technology in the areas of astrophysics and physics.
Thompson was an active OSA volunteer having served as the Co-Chairman of the OSA Topical Meeting on Freeform Optics, and as a member of the OSA Fellows Selection Committee and of the Board of Meetings. He also served as Co-Chairman on the OSA Freeform Optics Incubator, as a Topical Editor of JOSA A, Geometrical Optics, and Chairman of the Optical Design Technical Group. He was a member of AAAS, IEEE, SAE, SID, and was an SPIE Fellow.
Most recently he was presented the 2015 Alumnus of the Year in Optical Sciences award by the University of Arizona in recognition of his significant contributions to the field of optical sciences and engineering, particularly in optical systems development. Thompson also received The Conrady Award for contributions to Optical Aberration Theory in 2013 from SPIE. Thompson was widely published and held three US patents.
Thompson is survived by his wife and fellow researcher, Jannick Rolland, and will be deeply missed by the community.
Tributes to Kevin Rolland-Thompson
“Very sad to hear this news. Kevin was a driving force in the OSA meetings portfolio. The focus, consideration and true commitment he displayed for his colleagues, his volunteer efforts and his Society was remarkable. I will miss seeing Kevin walking down the halls at meetings, wearing his hat and sunglasses, surrounded by people…smiling like he was having the time of his life.” Chad Stark, OSA Deputy Executive Director/Chief Meetings Officer
30 September 2016 College of Optical Sciences Tucson, Arizona
10 October 2016 Chongqing Institute of Green and Intelligent Technology (CIGIT), Chinese Academy of Sciences (CAS) Beibei District, China
Reports Suggest BP Cut Corners on Safety, Design of Gulf Rig
June 21, 2010 at 12:00 AM EDT [Sorry, the video for this story has expired, but you can still read the transcript below. ]
GWEN IFILL: Now to the oil spill.
A judge in New Orleans says he will decide by Wednesday whether to overturn a temporary moratorium on new deepwater drilling projects. The ban, imposed in the wake of the Deepwater Horizon disaster, is being challenged by a company that ferries people and supplies to offshore rigs.
But, as courts look into the aftermath of the explosion, new questions are being raised about what caused it in the first place.
As the spill continues well into its third month, new reports surfaced today suggesting BP cut corners on safety and understated the amount of oil flowing into the Gulf. In one BP document given to Congress in early May and made public over the weekend, the company estimated a flow rate of 100,000 barrels a day, far above the 60,000 a day the government has estimated.
Massachusetts Democrat Ed Markey, chairman of the House Energy and Environment Subcommittee, said BP withheld the higher estimate intentionally.
REP. EDWARD MARKEY, D-Mass.: BP has either been lying or grossly incompetent from day one. I think that they have been trying to limit their liability.
GWEN IFILL: BP, however, said the higher estimate was a worst-case scenario that would only occur if the blowout preventer was completely removed, something they said they have no plans to do.
Coast Guard Admiral Thad Allen, the top U.S. oil spill official, said today experts are still trying to determine how much oil is leaking from the well. Exact numbers, he said, will not be available until a new containment metered system is installed next month.
Additional reports from The New York Times and the BBC also raised questions today about how much BP knew of existing risks that could have compromised the blowout preventer designed to seal the well.
TYRONE BENTON, Deepwater Horizon rig worker: We saw a leak on the pod. So, by seeing the leak, we informed the company. They have a control room where they could turn off that pod and turn on the other one, so that they don’t have to stop production.
QUESTION: So, they found a problem, and, instead of fixing it, they just shut down the broken bit?
TYRONE BENTON: Yes. They just shut it down and worked off another pod.
GWEN IFILL: The new revelations have continued to undermine BP’s credibility among Gulf Coast residents.
MAN: I’m outraged, but, more than anything, I’m disheartened about it. If they’re — if you can’t trust them with a statement like that, then the statement that they’re going to clean it up, you can’t trust that either.
GWEN IFILL: That anger received new fuel this weekend when photos surfaced of BP’s chief executive Tony Hayward at a yachting race off the coast of England.
CRAIG BIELKIEWIC, fishing boat captain: So, he got his life back. Or I guess he never lost his life. So, I guess we’re still working on ours.
NARRATOR: From the heart of the nation’s capital, “This Week.”
GWEN IFILL: White House Chief of Staff Rahm Emanuel also echoed that sentiment in a Sunday morning interview.
RAHM EMANUEL, White House chief of staff: Tony Hayward, he has got his life back, as he would say. And I think we can all conclude that Tony Hayward is not going to have a second career in P.R. consulting. This has just been part of a long line of P.R. gaffes and mistakes.
GWEN IFILL: Last Friday, the company announced Bob Dudley, an American BP executive, would take over its oil spill response operation, but Hayward remains on the job.
BP says it’s already paid out $105 million to 32,000 claimants, and has spent $2 billion trying to contain and clean up the oil. But that hasn’t stopped it from hitting Gulf Coast beaches, causing Florida’s Tourism Council to mount a special appeal to would-be vacationers.
NARRATOR: Two-hundred and twenty-one miles, that’s a lot of beach to choose from.
GWEN IFILL: Governor Charlie Crist:
GOV. CHARLIE CRIST, I-Fla.: Most of the beaches in Florida are pristine, are unimpacted by this event. The vast majority of them are. And the reality is, when tar balls come up on the beach, so long as we’re able to get the personnel there in a timely fashion, they can be cleaned up fairly quickly.
We can’t mislead people, but it is not misleading to say to any potential tourist that our beaches, the vast majority of our beaches, are beautiful. The water is clean. The fish are biting. And they can go scalloping.
GWEN IFILL: BP declined to confirm reports today it plans to raise $50 billion to cover the cost of the spill.
Everything Everywhere not ready to launch 4G network in September
Rik Henderson | 21 August 2012 | Phones
Everything Everywhere has admitted that it won't be ready to launch its 4G service on 11 September, the date it is licensed to do so by Ofcom. Instead, a company source has said that an October unveiling is more likely.
Speaking to Sky News, the "well-placed" source said that Orange and T-Mobile would be ready to roll out their new superfast mobile broadband services later than many are currently reporting. That would also mean that, should Apple add support for the 1800MHz spectrum band Everything Everywhere will be using to the forthcoming iPhone 5, as Pocket-lint's own sources have suggested, owners won't be able to use such services from day one.
It is currently and widely reported that the new iPhone 5 will be announced by Tim Cook on 12 September and sold from 21 September. If these reports are true - and many claim them to be - the UK's first 4G network will still be at least two weeks away from being accessible.
Of course, as the phone will also have 3G support, that doesn't mean buyers will be restricted to Wi-Fi only, but it will be interesting to see how the phone will be marketed in the weeks before 4G is actually ready.
Standard Process Scientist Authors Superfood Book Chapter
Book is part of the American Chemical Society’s Symposium series
Brandon Metzger, Ph.D., manager of discovery science at Standard Process Inc., contributed a chapter on carrots in a published ACS Symposium series book.
Palmyra, WI (PRWEB)
Brandon Metzger, Ph.D., manager of discovery science at Standard Process Inc., is the author of a chapter in the book, “Emerging Trends in Dietary Components for Preventing and Combating Disease,” published recently by the American Chemical Society (ACS). ACS is an organization for professionals in the field of chemistry and the largest scientific society in the world. The book is part of the ACS’ Symposium series. Books in the series are peer-reviewed and present a comprehensive view of current research on topics related to chemistry.
Metzger’s chapter, “Carrot Bisacetylenic Oxylipins – Phytochemicals Behind the Mask of the Superfood,” discusses how the label of a “superfood” is bestowed, the role of carrots in our modern diet, and the research related to their bioavailability and support of healthy function. “This is an exciting field of research that helps clarify why certain ‘superfoods’ offer superior health benefits,” says Metzger. His chapter details how polyacetylenes and other plant-defense compounds present in low amounts confer health benefits, a concept known as hormesis. “Research in the field of nutrition is full of examples of hormetic compounds such as isothiocyanates in brassicas (broccoli, kale, Brussels sprouts) and polyphenols in a wide range of plants,” explains Metzger. “These compounds protect the plant from insects, molds, bacteria, and other pests by being toxic while at the same time providing human health benefits at a lower dose due to the fact that we’re bigger than the plants.” Metzger also discusses his prior research on polyacetylenes. He details information like how to analyze them and factors that impact polyacetylene levels and stability. Visit standardprocess.com to learn more about this and other research being done by Metzger and the discovery science team at Standard Process. The discovery science team analyzes products and raw materials for research and development use. This group also works with staff at the Standard Process certified organic farm to increase yield and refine methods, with the production department to evaluate and institute manufacturing improvements, and with the quality control department to assist with test and process validation.
About Brandon Metzger, Ph.D., Manager of Discovery Science
As manager of discovery science, Metzger splits his time between working in the lab, collaborating with external experts, and directing the activity of the discovery science team. Metzger is the in-house expert and trainer on supercritical-fluid-extraction processes. Metzger holds a Ph.D. in nutritional science from the University of Wisconsin-Madison. His undergraduate degree is in biochemistry.
About Standard Process Inc.
For more than 80 years, Standard Process, headquartered in Palmyra, Wis., has provided high-quality, nutritional whole food supplements through health care professionals. Standard Process offers more than 300 products through three product lines: Standard Process whole food supplements, Standard Process Veterinary Formulas and MediHerb herbal supplements. The products are available only through health care professionals. Standard Process is involved in every step of production. The company grows crops on company-owned, organically certified farmland, utilizes state-of-the-art manufacturing processes, and employs the highest quality control standards. Standard Process strictly adheres to the Food and Drug Administration’s good manufacturing practice requirements. Through these measures, Standard Process can ensure that its products are of the utmost quality and potency. Standard Process was named a Top 100 Workplace in Southeastern Wisconsin and is a member of the Inc. 5000 Honor Roll.
For additional information about Standard Process, visit standardprocess.com.
Karren Jeske, APR
Standard Process Inc. (262) 495-6382 | 科技 |
Want people to think you're smarter? Smile
New research offers good news and bad news for the homely among us. First, the good news: People can't tell how smart you are by how good you look. The bad news? They think they can.
As reported last month in the journal Plos One, researchers had 40 men and 40 women take a standard intelligence test. Then they photographed the subject's faces, instructing them "to adopt a neutral, non-smiling expression and avoid facial cosmetics, jewelry, and other decorations."
Next, 160 strangers reviewed the photographs. Half of the reviewers rated the photos according to how smart the subjects looked, while the other half rated them according to the subjects' attractiveness.
The researchers found a strong relationship between how attractive people thought a person was and assumptions about their intelligence: The higher the attractiveness rating, the higher the rating for smarts. This relationship was particularly strong when the subjects were female.
But the connection between perceived intelligence and actual intelligence was much less clear. Indeed, there was a significant gender gap: Reviewers did pretty well at guessing the actual intelligence of men, but they were completely lost when trying to identify smart women.
Researchers surmised that judging women on their intelligence — rather than their attractiveness — may just not be something people practice very much: "The strong halo effect of attractiveness may thus prevent an accurate assessment of the intelligence of women."
But it gets weirder: When researchers compared the attractiveness ratings for various subjects with their IQ scores, they found no relationship whatsoever. This suggests that there is absolutely no connection between brains and beauty. But assumptions about a person's intelligence seem to be based largely on stereotypes related, at least in part, to notions of attractiveness.
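For readers who want to see what this kind of comparison looks like in practice, here is a minimal sketch in Python. The numbers are invented for illustration — this is not the study's data or code — but the correlation logic is the same idea: perceived intelligence can track attractiveness even when attractiveness has no relationship to measured IQ.

```python
# Illustrative sketch only: hypothetical numbers, not the study's data.
import numpy as np

rng = np.random.default_rng(0)

iq = rng.normal(100, 15, 80)             # measured intelligence (hypothetical)
attractiveness = rng.uniform(1, 7, 80)   # strangers' attractiveness ratings (hypothetical)
# Perceived intelligence tracks attractiveness, not IQ, mimicking the reported pattern.
perceived_iq = 0.8 * attractiveness + rng.normal(0, 0.5, 80)

def pearson(x, y):
    """Pearson correlation coefficient between two arrays."""
    return float(np.corrcoef(np.asarray(x), np.asarray(y))[0, 1])

print("perceived intelligence vs attractiveness:", round(pearson(perceived_iq, attractiveness), 2))
print("attractiveness vs measured IQ:           ", round(pearson(attractiveness, iq), 2))
```

With made-up inputs like these, the first correlation comes out strong and the second near zero — the same shape of result the researchers describe.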
To probe this idea further, the researchers constructed "intelligence stereotypes" for both men and women, using the photographs reviewers had rated by level of intelligence.
"Our data suggest that a clear mental image [of] how a smart face should look does exist for both men and women within the community of human raters," the researchers concluded. ". . . In both sexes, a narrower face with a thinner chin and a larger prolonged nose characterizes the predicted stereotype of high-intelligence, while a rather oval and broader face with a massive chin and a smallish nose characterizes the prediction of low-intelligence."
These assumptions carry centuries of cultural baggage. More to the point, they're simply wrong. The researchers found no relationship between these facial stereotypes and a person's actual intelligence.
"Men and women with specific facial traits were perceived as highly intelligent," the researchers concluded. "However, these faces of supposed high and low intelligence probably represent nothing more than a cultural stereotype because these morphological traits do not correlate with the real intelligence of the subjects."
It's worth noting that the study was conducted in the Czech Republic, which is overwhelmingly white and troubled by violence against the Roma minority. The researchers say nothing about the race or ethnicity of their subjects or the reviewers.
So where does this leave us? While it's comforting to know that there's no real connection between brains and beauty, we nonetheless form opinions of each other as if there were. This can have measurable, real-world consequences.
Daniel Hamermesh, an economist at the University of Texas at Austin, wrote in an August 2011 New York Times op-ed that being attractive "helps you earn more money, find a higher-earning spouse (and one who looks better, too!) and get better deals on mortgages." All told, he wrote, the lifetime earnings difference between people at opposite ends of the attractiveness spectrum averages out to about $230,000, in beauty's favor.
Finally, the research suggests one thing everyone can do to boost others' assessment of our intelligence: Smile more.
"There also seems to be a correlation between semblances of emotions of joy or anger in perceptions of high or low intelligence in faces, respectively," the researchers write. "The 'high intelligence' faces appear to be smiling more than the 'low intelligence' faces. | 科技 |
Firefly's Phil Szomszor says there's no "perfect curve" in B2B social media
prmoment.com
Five arguments for going digital when doing business-to-business PR by Phil Szomszor, head of business and digital at PR agency Firefly.. When I think of PR social media gurus, I imagine Siobhan Sharpe from the BBC comedy Twenty Twelve delivering her web strategy for the Games. In her view, Myspace was the best channel because it has the fewest number of people using it, and therefore is the fastest growing and most exciting. She also highlighted that social media during the Games wasn’t all about the sport, but public opinion about athletes and “all aspects of them”.. It’s not surprising that the PR industry was lambasted in this way – there are a hell of a lot of people making claims about social media that just can’t be supported and I’ve heard more than the occasional “perfect curve” quote from so-called gurus. It’s in the world of B2B PR that this anti-social media attitude is most prevalent. And while I agree that there’s a lot of smoke and mirrors with social media, that doesn’t mean to say that it should be dismissed altogether – in fact, I’d argue the future of B2B PR is digital.... [Here's a good argument for why digital PR is the future for PR ~ Jeff]
Gates slips up, reveals when next Xbox will ship
By TODD BISHOP, SEATTLE POST-INTELLIGENCER REPORTER
Published 10:00 pm, Monday, May 2, 2005
Microsoft Corp. has been highly secretive about whether it will release its next Xbox video-game console this year or next. Apparently, Bill Gates didn't get the memo.
Speaking yesterday to a national convention of business editors and writers, the Microsoft chairman seemed to confirm widespread industry speculation about the timing of the release, referring to plans to ship the console "this year." The remark, apparently inadvertent, came as Gates talked about consumer adoption of high-definition displays, which Microsoft is incorporating into its strategy as it takes on market leader Sony Corp. and its dominant PlayStation franchise.
"What will the year of high-definition be?" Gates asked, rhetorically. He then answered the question by saying that it would be tempting to think of it as "this year, because we're going to ship this next Xbox."
A representative of the Xbox team declined to comment on Gates' remark. Microsoft plans to announce details and timing for the next Xbox during a TV special next week.
Gates made the statement during the annual convention of the Society of American Business Editors and Writers in Seattle, where he gave a speech and took questions on stage from BusinessWeek's Jay Greene. The one-on-one format of that Q-and-A session bothered some of the organization's members. Although Greene drew in part on written questions submitted by the audience, some journalists in the group said it was unusual for a speaker at the SABEW conference not to take impromptu questions from the floor.
In an earlier session, for example, SABEW members were able to question Al Hubbard, assistant to President Bush on economic issues and director of the National Economic Council, on a range of issues.
Microsoft spokeswoman Tami Begasse said the company collaborated with SABEW to come up with the Q-and-A format. She said the idea was to let someone who covers the company and is familiar with it ask the questions, to ensure that they were relevant. Greene was able to ask any questions he wanted, including those from the audience and his own.
But Myron Kandel, CNN's founding financial editor and a former SABEW president, said he was disappointed in the format. He noted that none of the questions focused on two of the more newsworthy topics surrounding Microsoft -- the company's decision to take a neutral stand on Washington's gay rights legislation and Gates' recent statements about the need for education reform.
Although the questions didn't delve into those issues, Gates discussed a range of topics in his speech and in response to Greene's questions. A sampling of his comments:

On the decision to switch from awarding employees stock options to giving them actual Microsoft shares: "We probably never should have used stock options. I actually regret that we ever used them."

On reports that Microsoft is in talks to sell its stake in the MSNBC cable joint venture: "We certainly are very committed to what we're doing with MSNBC. It's a very big thing for us. We are sort of more passive when it comes to the video side. We're always looking at our businesses, but that's been one that's been great for us and worked out very well."

On how the next Xbox, code-named Xenon, will be similar in some respects to the company's Media Center PC software: "If you've used that menu, when you use this Xenon, you'll see a menu a lot like that, that lets you get photos, TV, music and all those different things."

On positive coverage of rival Apple's new Mac OS X Tiger operating system, which has features Microsoft won't have until 2006 in the next Windows: "Because they're the super-small-market share guy, they get all these statements about them. But I actually thought that was great -- there it was, the general press writing about operating systems."

On weblogs in which Microsoft employees speak their mind: "I'd say overwhelmingly it's good. It does raise lots of questions. ... Now you have thousands of spokespeople and being off the cuff is part of the whole charm of the thing." But he said the only real challenge is with executives who start blogs but don't have time to make relatively frequent posts. Gates said he has so far stayed away from starting his own for that reason.
Study: 1 in 4 consumers had error in credit report
By MARCY GORDON, AP Business Writer
Updated: 02/11/2013 06:38:03 PM EST

WASHINGTON — One in four consumers found an error in a credit report issued by a major agency, according to a government study released Monday.

The Federal Trade Commission study also said that 5 percent of the consumers identified errors in their reports that could lead to them paying more for mortgages, auto loans or other financial products.

The study looked at reports for 1,001 consumers issued by the three major agencies — Equifax, Experian and TransUnion. The FTC hired researchers to help consumers identify potential errors.

The study closely matches the results of a yearlong investigation by The Columbus Dispatch. The Ohio newspaper's report last year said that thousands of consumers were denied loans because of errors on their credit reports.

The FTC says the findings underline the importance of consumers checking their credit reports. Consumers are entitled to a free copy of their credit report each year from each of the three reporting agencies.

The FTC study also found that 20 percent of consumers had an error that was corrected by a reporting agency after the consumer disputed it. About 10 percent of consumers had their credit score changed after a reporting agency corrected errors in their reports.

The Consumer Data Industry Association, which represents the credit reporting agencies and other data companies, said the FTC study showed that the proportion of credit reports with errors that could increase the rates consumers would pay was small.
The study confirmed "that credit reports are highly accurate, and play a critical role in facilitating access to fair and affordable consumer credit," the association said in a statement.

Experian, a British company with international operations, also said in a statement the study confirms that consumer credit reports are predominantly accurate. At the same time Experian said it "is not satisfied with this result and we continue to work toward ensuring credit reports are 100 percent accurate."

The new U.S. Consumer Financial Protection Bureau has the authority to write and enforce rules for the credit reporting industry. In September the agency began ongoing monitoring of the credit agencies' compliance. It's the first time they have faced such close federal oversight.

The CFPB hasn't yet taken any public action against the agencies. However, it is accepting complaints from consumers who discover incorrect information on their reports or have trouble getting mistakes corrected. The agencies have 15 days to respond to the complaints with a plan for fixing the problem; consumers can dispute that response.

— AP Business Writer Daniel Wagner contributed to this report.
Call of Duty: Black Ops Sells $360 Million on Launch Day, Is New 'Biggest Entertainment Launch Ever'
Call of Duty: Black Ops sold through an estimated $360 million in the first 24 hours in North America and the UK alone, Activision has announced, leading the publisher to declare the November 9 launch the new "biggest entertainment launch ever." It had previously bestowed the title upon Infinity Ward's Call of Duty: Modern Warfare 2.
Treyarch's Cold War-era shooter sold through approximately 5.6 million copies in North America and the United Kingdom, according to Activision's internal estimates, while Modern Warfare 2 only managed a paltry 4.7 million copies back in November 2009.
"There has never been another entertainment franchise that has set opening day records for two consecutive years and we are on track to outperform last year's five-day global sales record of $550 million," boasted Activision Blizzard CEO Bobby Kotick.
Activision has also pledged to "find 1,000 jobs for veterans" and is donating $1 million from the sales of Cod Blops to the Call of Duty Endowment, a "non-profit, public benefit corporation" which helps service members with job training and placements.
"This Veteran's Day, we should all take a moment to reflect on the sacrifices that the brave men and women of our military are making in service of our country," said Kotick. "It is a national tragedy to have these men and women put their lives on the line in Afghanistan and Iraq, only to come home to face another battle of finding a new career... I am honored that we are able to use the popularity of Call of Duty to bring much needed attention to this serious issue and assist veterans when they return home."
Call of Duty: Black Ops was released on PC, Xbox 360, PlayStation 3, Wii and Nintendo DS. It hasn't been the smoothest launch, though, as a number of bugs and exploits have been reported, particularly on PC. Activision social media manager Dan Amrich has urged players to report them through the correct channels and act with civility.
Shacknews 'Best of 2011' Awards
The Shacknews 'Game of the Year' awards have begun. This year, we've done things a little differently. Here's how we've decided the Best of 2011.
We put 2011 in the history books, and with that the time comes to recognize the best from the impressive collection of videogames released over the year. Before even getting to the nominees, though, we took a good look at the awards themselves. When we brought them back for 2010, we took a pretty standard approach, recognizing best of's for genres and platforms, special awards for unique attributes, and, ultimately, a game of the year. The result created confusion and dissatisfaction over what games qualified for which awards and, despite having so many of them, only calling out a few titles which then won multiple times.

For 2011 we're taking a different tack. Rather than dilute things across a wide range, we put everything in contention for the grand prize of Game of the Year. Each member of the team cast a ballot of their top five games of the year, which we used on a weighted scale to determine our top ten games of the year.

Starting this week, we'll be working through that list, beginning with five honorable mentions, one each day. We chose not to rank this second half of our top ten because the difference between their scores was rather insignificant. Each of these games found its way onto more than one list, but they didn't receive the same consensus as our top five award recipients.

The following week we'll dive into the top five. We'll be awarding fourth, third, second, and first runners-up each day. Finally, the series culminates in the Shacknews Game of the Year, to be named Friday, January 20.

While the order might be a source of debate, we're excited about how the awards turned out this year. 2011 offered a variety of excellent games and we feel like these represent the very best of them. There's sure to be a lot to talk about with each selection and the awards as a whole when all is said and done. We look forward to being a part of those conversations with you.

And with that, let the 2011 Shacknews Game of the Year Awards begin! Hope you enjoy them.

Shacknews Game of the Year 2011: The Witcher 2: Assassins of Kings
'Best of' First Runner-Up: Portal 2
'Best of' Second Runner-Up: Batman: Arkham City
'Best of' Third Runner-Up: The Elder Scrolls V: Skyrim
'Best of' Fourth Runner-Up: Deus Ex: Human Revolution
Honorable Mention: Bastion
Honorable Mention: Super Mario 3D Land
Honorable Mention: The Legend of Zelda: Skyward Sword
Honorable Mention: Saints Row: The Third
Honorable Mention: LittleBigPlanet 2
Classic Cameras; The Canon 7 And The "Dream" Lens; Would You Believe f/0.95? Page 2
The rangefinder was a coincident image type, and no doubt anticipating its use with the Dream Lens at wide apertures, Canon lengthened its base by about half as much again as that on the company's previous rangefinder cameras. But with the meter cell also larger than usual, it got in the way of the rangefinder, a problem that was overcome by cutting a small square aperture out of the cell to accommodate the second rangefinder window.
The Canon 7 also had a different lens mount than other Canons. Like those, it had an internal screw thread that allowed the use of the traditional Canon lenses. But that small screwmount wasn't enough to accommodate the huge diameter and weight of the Dream Lens. So Canon added an external, three-lug bayonet, which worked on a breech-lock principle, similar to that found on the early Canonflexes.
The Canon 7 had a metal focal plane shutter, also similar to that used in the then current Canonflex SLRs, and with the unusual addition of a "T" setting for time exposures as well as the more traditional "B" setting. The curtains were made of stainless steel sheet 0.018mm thick, with a black plastic coating. Top speed was 1/1000 sec and the shutter could be locked when the camera was not in use or when fitting a cable release--an action that could inadvertently trip the shutter.
At the time of its launch, there was a wide range of lenses to fit the Canon 7, from 25-1000mm, although the longer focal lengths had to be used in conjunction with the Mirror Box, which used the same bayonet mount as the Dream Lens, and effectively turned the camera into an SLR.
The Canon 7 body on its own weighed 1 lb, 7 oz, but with the Dream Lens attached the weight went up to 3 lbs, 4 oz. The specification comprised seven elements in a five-group combination that aimed to kill flare and give perfect edge-to-edge sharpness. That was what Canon claimed, anyway. In practice, photographers back in the '60s soon found that although the lens was outstanding when stopped down a little, it wasn't as sharp as it might be at the wider apertures, especially at the edges, and was rather soft and prone to flare at its maximum setting.
At the time of its launch, some doubt was expressed by some as to whether it was truly an f/0.95 lens. Some reports claimed that it might actually have been closer to f/0.99 in the strictest sense of mathematical calculation. But since the Japan Industry Standards (JIS) allowed error margins of 5 percent, Canon's claim of f/0.95 as a maximum aperture was perfectly legitimate.
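The arithmetic behind that tolerance argument is simple to check. Here is a rough sketch, assuming a nominal 50mm focal length for the Dream Lens (an assumption for illustration, not a figure quoted in this article):

```python
# Back-of-the-envelope check of the JIS tolerance argument (illustrative assumptions only).
focal_length_mm = 50.0          # assumed nominal focal length of the Dream Lens
claimed_f_number = 0.95
actual_f_number = 0.99          # the figure some reports suggested

# f-number = focal length / entrance pupil diameter
claimed_pupil_mm = focal_length_mm / claimed_f_number   # ~52.6 mm
actual_pupil_mm = focal_length_mm / actual_f_number     # ~50.5 mm

tolerance = 0.05                # the article cites a 5 percent JIS error margin
within_margin = actual_f_number <= claimed_f_number * (1 + tolerance)

print(f"claimed entrance pupil: {claimed_pupil_mm:.1f} mm")
print(f"actual entrance pupil:  {actual_pupil_mm:.1f} mm")
print(f"f/0.99 within 5% of the claimed f/0.95? {within_margin}")  # True: 0.95 * 1.05 = 0.9975
```

In other words, even if the true maximum aperture were f/0.99, it would still fall inside the 5 percent margin, which is why Canon's f/0.95 claim held up.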
In '65, the Canon 7 was upgraded, replacing the big selenium cell meter with a smaller, neater CdS version and adding the accessory shoe that had been missing from the original model. The new camera, which was still sold with the Dream Lens, was called the Canon 7s. Two years later, in '67, a final version, called the Canon 7sZ, with just a small cosmetic change to the rangefinder, appeared for a very short run. But by this time, SLRs were killing the sales of rangefinder cameras for good and neither of these cameras ever enjoyed the popularity of the original model.
The Canon 7 was a truly great camera of its time, but it is tempting to wonder if there really was any serious use for a lens of this type, and whether its introduction was more of a marketing ploy than any real landmark in camera/lens design. Such a lens might have been useful in the world of movie and TV cameras, for which similar optics were made, but as far as 35mm still cameras were concerned, one has to wonder if Canon perhaps produced it for no better reason than to show they could.
A couple of years ago, a Canon 7 in chrome with an f/1.4 or f/1.2 lens would have cost you around $600 for a mint condition camera, plus another $600 for the Dream Lens. However, during the past few years, prices have fallen back and these days, although dealers still try to maintain the higher price levels, a Canon 7 complete with f/0.95 lens can sell on eBay for something more like $700-$800. Deduct about a third from these prices for a camera in good, but used condition.
BBC Documentary on Census of Marine Life Now Online in HD
The Death of the Oceans, a BBC documentary detailing some of the key findings of the Census of Marine Life, is now freely available to the public as a four-part series on the popular media-sharing site, YouTube. Originally aired in October of 2010 and hosted by Sir David Attenborough, the documentary examines what the Census, a massive, ten-year international research project that studied the distribution, diversity, and abundance of marine life, has to tell us about the future of the world's oceans.
Spitzer Reveals 'No Organics' Zone Around Pinwheel Galaxy
The Pinwheel galaxy is gussied up in infrared light in a new picture from NASA's Spitzer Space Telescope. The fluffy-looking galaxy, officially named Messier 101, is dominated by a mishmash of spiral arms. In Spitzer's new view, in which infrared light is color coded, the galaxy sports a swirling blue center and a unique, coral-red outer ring.

A new paper appearing July 20 in the Astrophysical Journal explains why this outer ring stands out. According to the authors, the red color highlights a zone where organic molecules called polycyclic aromatic hydrocarbons, which are present throughout most of the galaxy, suddenly disappear.

Polycyclic aromatic hydrocarbons are dusty, carbon-containing molecules found in star nurseries, and on Earth in barbeque pits, exhaust pipes and anywhere combustion reactions take place. Scientists believe this space dust has the potential to be converted into the stuff of life.

"If you were going to look for life in Messier 101, you would not want to look at its edges," said Karl Gordon of the Space Telescope Science Institute in Baltimore, Md. "The organics can't survive in these regions, most likely because of high amounts of harsh radiation."

The Pinwheel galaxy is located about 27 million light-years away in the constellation Ursa Major. It has one of the highest known gradients of metals (elements heavier than helium) of all nearby galaxies in our universe. In other words, its concentrations of metals are highest at its center, and decline rapidly with distance from the center. This is because stars, which produce metals, are squeezed more tightly into the galaxy's central quarters.

Gordon and his team used Spitzer to learn about the galaxy's gradient of polycyclic aromatic hydrocarbons. The astronomers found that, like the metals, the polycyclic aromatic hydrocarbons decrease in concentration toward the outer portion of the galaxy. But, unlike the metals, these organic molecules quickly drop off and are no longer detected at the very outer rim. "There's a threshold at the rim of this galaxy, where the organic material is getting destroyed," said Gordon.

The findings also provide a better understanding of the conditions under which the very first stars and galaxies arose. In the early universe, there were not a lot of metals or polycyclic aromatic hydrocarbons around. The outskirt of the Pinwheel galaxy therefore serves as a close-up example of what the environment might look like in a distant galaxy.

In this image, infrared light with a wavelength of 3.6 microns is colored blue; 8-micron light is green; and 24-micron light is red. All three of Spitzer's instruments were used in the study: the infrared array camera, the multiband imaging photometer and the infrared spectrograph. Other authors of the paper include Charles Engelbracht, George Rieke, Karl A. Misselt, J.D. Smith and Robert Kennicutt, Jr. of the University of Arizona, Tucson. Smith is also associated with the University of Toledo, Ohio, and Kennicutt is also associated with the University of Cambridge, England.

NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology, also in Pasadena. Caltech manages JPL for NASA. Spitzer's infrared array camera was built by NASA's Goddard Space Flight Center, Greenbelt, Md. The instrument's principal investigator is Giovanni Fazio of the Harvard-Smithsonian Center for Astrophysics. Spitzer's infrared spectrograph was built by Cornell University, Ithaca, N.Y. Its development was led by Jim Houck of Cornell. The multiband imaging photometer for Spitzer was built by Ball Aerospace Corporation, Boulder, Colo., and the University of Arizona, Tucson. Its principal investigator is George Rieke of the University of Arizona.

For more information about Spitzer, visit http://www.spitzer.caltech.edu/spitzer and http://www.nasa.gov/spitzer.

Whitney Clavin 818-354-4673/818-648-9734
Jet Propulsion Laboratory, Pasadena, Calif.
jpl2008-138 ssc2008-14
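For readers curious how a three-band composite of this kind is typically assembled, here is a minimal sketch. It is generic image handling with synthetic arrays standing in for the real data — not the actual Spitzer processing pipeline — and it simply applies the channel mapping described in the release (3.6 microns to blue, 8 microns to green, 24 microns to red).

```python
# Generic false-color composite sketch; not the Spitzer pipeline. Inputs are synthetic.
import numpy as np

def normalize(band):
    """Scale a 2-D array of brightness values into the 0-1 range."""
    band = band.astype(float)
    return (band - band.min()) / (band.max() - band.min() + 1e-12)

def false_color(band_3p6, band_8, band_24):
    """Map 3.6 um -> blue, 8 um -> green, 24 um -> red, as described in the release."""
    return np.dstack([normalize(band_24),   # red channel
                      normalize(band_8),    # green channel
                      normalize(band_3p6)]) # blue channel

if __name__ == "__main__":
    shape = (256, 256)
    rng = np.random.default_rng(1)
    composite = false_color(rng.random(shape), rng.random(shape), rng.random(shape))
    print(composite.shape)  # (256, 256, 3)
```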
The instrument's principal investigator is Giovanni Fazio of the Harvard-Smithsonian Center for Astrophysics. Spitzer's infrared spectrograph was built by Cornell University, Ithaca, N.Y. Its development was led by Jim Houck of Cornell. The multiband imaging photometer for Spitzer was built by Ball Aerospace Corporation, Boulder, Colo., and the University of Arizona, Tucson. Its principal investigator is George Rieke of the University of Arizona.For more information about Spitzer, visit http://www.spitzer.caltech.edu/spitzer and http://www.nasa.gov/spitzer.Whitney Clavin 818-354-4673/818-648-9734Jet Propulsion Laboratory, Pasadena, Calif.jpl2008-138 ssc2008-14
ssc2008-14
No Organics Zone Circles Pinwheel Galaxy
ssc2008-14a
The Pinwheel Galaxy, M101, in the Infrared
ssc2008-14b
September 28'Pandora's Cluster' Seen by Spitzer
September 8'Enterprise' Nebulae Seen by Spitzer (annotated)
August 29Age-Defying Star
August 26Spitzer's Journey
Feedback • Mailing List • Privacy Policy • Image Use Policy • Credits • Legacy Site
Toward a Flexible Energy Future
by Lord Andrew Turnbull
Illustration by Lars Leetaru
In most of the industrialized world, there is a growing consensus that nations must reexamine and restructure their energy portfolios. A number of factors have contributed to this awareness: increasing fuel prices, the insecurity of energy supplies, and the recognition that humanity must reduce its carbon dioxide (CO2) emissions to address global climate change. Even countries that have enjoyed steady supplies of electricity, transportation fuel, and heating fuel in the past will find it much more difficult to maintain control over those supplies in the future. Any responsible government that is not thinking seriously about its country’s energy investments today — from both the public and private sectors — risks being caught cold, powerless, and immobile in the future. But there is typically a five- to 10-year lag between an energy investment and the time the new capacity comes online. After that, countries are stuck with the facilities they have built for at least several decades. Thus, every major decision made now about energy involves a bet about the future. Because we don’t know which mix of fuels will be available or most useful in the coming years, how can investments best be allocated among natural gas, coal, oil, nuclear power, renewables, or improved energy efficiency? The debate over these choices is contentious. In my own country, the United Kingdom, there are heated arguments over whether nuclear power should be promoted or decommissioned; whether increased use of natural gas is or is not a viable option; and whether wind farms represent an ecological breakthrough or an inefficient blight on the countryside. In any given year, new energy technologies (hybrid cars, hydrogen fuel, biofuels) emerge and add to the contention. The only way that political and business decision makers can appropriately manage these options is through a flexible portfolio: not a choice about a particular mix of fuels but through an effective and resilient marketplace that can take advantage of economic principles to help us settle on the optimal combination of investments at any given time. In policy circles, this is coming to be known as the “modified market approach.” The government (or perhaps a regional political structure like the European Union) establishes a framework for energy prices. This framework incorporates the prices and costs of energy, as set by supply and demand, but also takes into account the social and ecological benefits and harms of each fuel source. Fuels that exacerbate climate change, for example, are made more expensive; fuels that reduce the danger cost less. An implied surcharge on carbon-based fuels reflects the desired CO2 reduction target. Once a rationale is agreed on, the government embeds the new framework in permits, surcharges, and regulations, after which the various technologies can effectively compete in the marketplace.
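To make the mechanism concrete, here is a minimal sketch of how such a framework changes the comparison among fuels. Every number in it is invented purely for illustration — these are not actual U.K. or E.U. generation costs, emission factors, or carbon prices.

```python
# Illustrative only: invented numbers, not real cost or emissions data.
# Effective cost = generation cost + shadow carbon price * emissions intensity.

shadow_price = 30.0  # assumed carbon price, currency units per tonne of CO2

# (cost per MWh, tonnes CO2 per MWh) -- hypothetical figures for illustration
technologies = {
    "coal":        (35.0, 0.90),
    "natural gas": (45.0, 0.40),
    "nuclear":     (55.0, 0.00),
    "wind":        (60.0, 0.00),
}

def effective_cost(cost_per_mwh, co2_per_mwh, carbon_price):
    """Add the implied carbon surcharge to the plain generation cost."""
    return cost_per_mwh + carbon_price * co2_per_mwh

ranked = sorted(
    (effective_cost(cost, co2, shadow_price), name)
    for name, (cost, co2) in technologies.items()
)
for cost, name in ranked:
    print(f"{name:12s} {cost:6.1f} per MWh including carbon adjustment")
```

Raising or lowering the shadow price re-ranks the options without anyone having to dictate a particular fuel mix, which is precisely the appeal of the approach.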
The modified market approach is a relatively recent innovation. Traditionally, governments have handled energy decisions in two ways: “Add it up” and “laissez-faire.” Adding it up is a time-honored approach. Government planners assess worldwide energy needs and generation capacity, make projections for the next 20 years, calculate the gap between future demand and supply, and decide which mix of fuels to subsidize, tax, or invest in. Flexible Portfolios Even at its best, this approach has many shortcomings. It is static; if energy technologies, supply constraints, or demand patterns change, another plan will be needed. It is also vulnerable to lobbying, with the verdict going to whichever pressure group shouts loudest that “our favorite fuel is better than yours.” And if political priorities change, the desired goals cannot be adjusted without a new plan. This makes it extremely difficult to take advantage of lower-cost opportunities, such as new technological developments, that emerge in later years. Faced with these complexities, many governments take the “laissez-faire” approach: they leave it to the oil and power industries to govern their own investments. But this method, too, has many shortcomings. Market prices do not capture such externalities as the environmental impact of fuels, and they do not recognize the complex interdependencies among different forms of energy, the infrastructure required to maintain them, the security of supply, the needs of customers, and the uncertainties of the future. To their credit, many governments are gravitating to the flexible portfolios of the modified market approach. In the U.K., for example, the government (controlled by the Labour Party) and the largest opposition party (the Conservative Party) agree on four basic objectives: to reduce CO2 emissions; keep the economy competitive (by reducing the price of energy); maintain security of supply; and ensure that people on all levels of society, including the poor, have heat and mobility. Both parties have encapsulated these objectives in a rationale that, when it is complete, will allow energy sources to compete and evolve, without regulators and investors having to predict in advance precisely which technologies will be adopted by the market. For a modified market approach to succeed, there must first be a clear set of targets for the reduction of CO2 emissions. In setting the formulas that determine a nation’s energy portfolio, we should favor not merely the cheapest fuels, but the optimal fuel mix that adjusts over time, operates effectively during scarcities and surpluses, produces energy when the wind blows and when it doesn’t, and is independent of the vagaries of international oil politics (it must be viable whether or not a nation is on good terms with Russia, Nigeria, Saudi Arabia, Iran, or Venezuela). To accomplish this, three elements, in particular, must be reflected: the ecological impact of carbon, the variability of fuel supply, and the costs of energy security. If we understand how to factor in those elements, then all available energy sources — oil, gas, coal, alternative fuels, hydrogen, nuclear power, hydropower, and renewables — can compete on an equal footing. Environmental Shadow Prices In the coming years, faced with general climate change and more extreme weather patterns, every government will have to make a decision: To what levels must carbon fuel emissions be reduced to affect the rate of global warming, and by what year? 
No government can ignore this imperative for long. There is a growing body of opinion which (drawing on conclusions from such groups as the International Intergovernmental Panel on Climate Change and the American Association for the Advancement of Science) recognizes that human activities are contributing significantly to the dangers of global warming. Already, serious efforts to mitigate climate change are moving forward, with those political leaders who refuse to participate finding themselves marginalized; for example, a July 2006 greenhouse gas reduction agreement between U.K. Prime Minister Tony Blair and California Governor Arnold Schwarzenegger bypassed Washington completely. The effective modified market approach must reflect the real environmental costs of different fuels. Having set a CO2 reduction target — taking into account the estimated effects of global warming on sea levels, crops, and the weather, and the destruction such effects could cause — governments must engineer a “shadow price of carbon” that delivers that target. The new framework would, in effect, modify the price of every fuel and technology, reflecting the increased risks caused by its CO2 emissions, while exempting fuels and technologies that emit little or no CO2.
The process for setting a shadow price must be transparent enough to draw open criticism from both environmental and economic experts, and robust enough to meet or incorporate that criticism without losing scientific credibility. It must also be consistent enough to enable suppliers to make informed predictions about costs and to set prices with confidence. Documentation would be required for each estimated cost — and costs would be revisited periodically to take into account changes in technology, practices, and damage assessment techniques. Priorities could no longer be determined by pressure groups demanding, for example, expansions of natural gas lines, bans on nuclear power, or restrictions on windmills. In its approach to climate change, the E.U. has adopted a modified market system, at least in principle. The Emissions Trading Scheme (ETS), introduced in 2005, is still (as of late 2006) in the first phase of implementation. Each member country proposes a cap on greenhouse gases emitted by power plants and other major industrial sites; the E.U. approves the caps; and then companies are granted permits to operate within those caps. Carbon-profligate companies can buy more emissions rights from carbon-frugal companies, giving everyone more incentives for lowering emissions and building efficiencies. But the ETS is an imperfect work in progress, in which political horse-trading overrides the best scientific judgment. The caps were so generous in the first year that no countries were forced to reduce total CO2 emissions — which (as many observers noted) undercut the entire purpose of the initiative. In the end, it is not clear whether the ETS will have the political will to overcome bargaining on the part of special interest groups, but only a tough stand will allow it to deliver a true shadow price for carbon that genuinely leads to the CO2 reductions required to mitigate climate change. Because the ETS is still embryonic, most countries in the E.U. are retaining a national carbon or energy tax. This represents a significant structural difference: Trading systems, which fix the level of permitted emissions and allow the price to vary, tend to be more effective at capping emissions than tax systems, which fix the price and allow the amount of pollution to vary. Tax systems are also more prone to the arbitrariness of top-down control; the U.K.’s Climate Change Levy, for example, is a confused mixture of energy and carbon tax, levying on nuclear power, even though it is a low source of CO2.
The Price of Volatility Emissions trading programs represent a valuable first step, but because they don’t take the other uncertainties of the sector into account, they alone are not an adequate means of guiding energy investment. Volatility adds cost to any portfolio. Investors know this well; they diversify across a variety of assets, balancing their requirements for growth and security. A good modified market energy portfolio should do the same, taking into account the volatility of the availability and price of different fuels. Natural gas, as the world has witnessed, can fluctuate enormously. In the U.K., the spot price of natural gas doubled between 2004 and 2006. Even more damaging were two price spikes, in which U.K. gas prices briefly rose about 400 percent. Importing nations, in particular, have little recourse if suppliers raise prices suddenly (as Russia’s Gazprom has done) or supplies approach a natural peak (as has been predicted for oil). Other fuels are relatively stable; once reactors are built, the price of nuclear power remains relatively constant. Nuclear power can therefore take the role that bonds play in a pension fund: not necessarily the highest-yielding asset, but one that reduces volatility. Another source of uncertainty that needs to be addressed arises from the protracted and uncertain nature of planning and licensing regulations. These are particularly damaging to highly capital-intensive options, such as the building of new liquid natural gas (LNG) or nuclear power facilities, or the recovery of heat from waste incineration. The U.K. government is proposing to address this uncertainty by allowing licensing of technologies to run in parallel with the planning process. Any resumption of nuclear construction should be preceded by agreement on a strategy for disposal of nuclear wastes (though those energy sources emitting CO2 as a waste are not required to meet an equivalent constraint). We should also insist that enough funds be allocated for waste disposal and decommissioning of plants, lodged outside the producer’s balance sheet. If nuclear power can compete with the benefit of the carbon adjustment while meeting its waste and decommissioning costs in full, then it should find a place in the energy mix. Conversely, if it is still uneconomical, it should not. And the same logic should apply to other technologies, including renewables. There is no reason why established renewable energy technologies such as wind power should receive both the preference of the CO2 adjustment and a guaranteed market share (as is currently the case in the U.K.). Recent analysis conducted by the U.K. government shows that nuclear power would be viable over a wide range of scenarios. It would struggle to compete only if gas prices and the shadow price of carbon were both low. That combination is inherently implausible, however; it would almost certainly lead to a higher shadow price for carbon, bringing nuclear power back into contention. During my tenure as Cabinet secretary, I saw the shortcomings of addressing the energy supply in piecemeal fashion. Although there were two attempts to write an energy policy paper, at the time no one wanted to challenge prevailing assumptions — for example, the assumption that greater energy efficiency, renewables (such as wind power), and natural gas would provide enough carbon reduction in and of themselves. 
Such assumptions were undermined when the price of energy shot up, and the Russians and others reminded us of the vulnerabilities of natural gas. But as I write, a consensus is building in Europe and North America with respect to global climate change and energy security, and it is coupled with a growing sense of urgency. We now have a moment of opportunity to create a framework that enables the essential energy choices to be made — not by dictating them, but by providing open competition and building all the relevant factors into the marketplace where choices are made.
Reprint No. 06404
Author Profile: Lord Andrew Turnbull ([email protected]) is a senior advisor to Booz Allen Hamilton based in London. Between 2002 and 2005 he was secretary of the Cabinet and head of the Home Civil Service in the United Kingdom.
Blu-ray boss: HD DVD is not good enough
By James Rivington
It's no good and is outmatched by Blu-ray, apparently
The HD DVD disc format is inferior to Blu-ray and does not offer enough capacity for modern day HD movies. That's the view of Frank Simonis, the chairman of the Blu-ray Disc Association, who made the comments in an interview with Tech.co.uk.

"The majority of Hollywood studios have chosen Blu-ray not because we have asked them to - we let them make up their own mind. They choose Blu-ray because it is the superior format," Simonis said.

"For Disney to do the movie Cars in HD including the interactivity, a 50GB disc is needed. Pirates Of The Caribbean you couldn't do on an HD DVD disc. You'd have to have multiple discs. How can that be a good thing?

"Should you stop half way and say 'let's have a pee and then continue'? No that does not work. The movie is that long. And HD DVD is simply not good enough to carry that, or it would discriminate the quality.

HD DVD too small?

"Take Disney's Cars. You look at the depth of the graphical animation in that movie and HD DVD would discriminate that part. Therefore the studio's choice was clear. HD DVD is not good enough. Blu-ray is the only format which has the 50GB.

"HD DVD tried to counter that with the statement that they could make a three-layer disc, but that's just a statement. I have never seen any product that can handle that, and nor did I see the title ever coming alive. They create mist in a market where they should in fact clarify the situation and satisfy the consumer's needs," Simonis said.

He also went on to say that it is not true that HD DVD players are cheaper to produce than Blu-ray ones. He said that Toshiba and other HD DVD manufacturers are heavily subsidising the players to make them a more attractive proposition.

"Both format players use the same back-end decoders so the video and the audio is nearly similar. And I can't imagine that an optical mechanism for HD DVD and Blu-ray has a big spread in cost - the expensive part is the blue laser. Technology-wise, there is not a major difference.

"They do something else in their marketing [to make this possible]. Now, I don't know how deep their pockets are but I hear the rumour that they spent $150m just to get one studio over. Perhaps they like to play that game, and I wish them a lot of luck.

"Because at the end of the day we believe in fair competition between the different hardware manufacturers to make a business all based on our own model. Not on subsidised pricing in the products."

Frank Simonis is also the senior director of communications at Philips.
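Simonis's capacity argument is easy to sanity-check with rough arithmetic. The running time and bitrates below are assumptions chosen for illustration, not figures from either camp:

```python
# Rough capacity estimate: illustrative assumptions, not actual disc mastering specs.
runtime_minutes = 135            # assumed feature length
video_mbps = 25.0                # assumed average HD video bitrate
audio_and_extras_mbps = 8.0      # assumed audio tracks, subtitles, interactivity

total_mbps = video_mbps + audio_and_extras_mbps
total_gigabytes = total_mbps * runtime_minutes * 60 / 8 / 1000  # Mbit/s -> GB (decimal)

print(f"~{total_gigabytes:.0f} GB for the feature alone")
# At rates like these, a 15 GB single-layer or 30 GB dual-layer HD DVD fills up quickly,
# while a 50 GB dual-layer Blu-ray disc leaves headroom for bonus material.
```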
Netflix CEO Reed Hastings shows off the company's set top box at Netflix headquarters in Los Gatos, Calif. (Paul Sakuma/AP)
Netflix to shake up Canadian industry
Ottawa — The Globe and Mail
Monday, Jul. 19, 2010 7:38PM EDT
This is one movie that the incumbents in the Canadian video-rental industry would prefer not to see.
Netflix announced on Monday it would bring its streaming online movie rental service to Canada this fall, marking a long-awaited international expansion for the U.S. company, which rents movies there by mail and online.
Netflix's entry into Canada poses a serious threat to players like Blockbuster Canada and Rogers Communications Inc., which, until now, have largely avoided the pricing wars that have decimated brick-and-mortar rental chains in the U.S.
Since it was founded in 1997, Netflix has helped to create an upheaval in the rental market south of the border, kicking off a pricing war aided by automated rental kiosks like the Redbox machines operated by Coinstar. In-store rentals now account for less than half of the rental market in the U.S.
The picture is very different in Canada, where rental stores still dominate the market and generate a steady stream of profits. The online movie rental business accounts for just 0.6 per cent of the Canadian market, and kiosks take up only 1.4 per cent, according to Convergence Consulting Group, which tracks the industry.
That will change in the months ahead. Netflix is just the first in a series of challengers looking to take a piece of the $1.3-billion Canadian market for movie rentals.
One such challenger is Zip.ca. At its distribution centre near the Ottawa airport, a worker in white gloves holds a DVD over a machine with a white fabric disc, pressing a button that makes it whirr loudly as it delivers a high-speed polish.
Zip was created in 2004 to be essentially the Canadian version of Netflix's postal service: subscribers pay a monthly fee to receive movies through the mail. When they are finished watching, they return the disc in a postage-paid envelope and Zip pulls another DVD from the customer's Internet wish list of titles and sends it off. Monthly subscriptions range from $11 to $50, depending on how many DVDs a customer wants to have out at a time.
"We have room for expansion, as you can see," said Rob Hall, chief executive officer of Zip's parent company, Momentous, as he gestured around the former FedEx shipping centre that houses the operations.
But whether from a lack of marketing or some other reason, Zip hasn't grown at the speed that Netflix did in the U.S., where it reaches 14 million subscribers. Zip doesn't disclose its membership numbers, but Convergence Consulting estimates its mail business at 50,000 subscribers as of the end of 2009.
"The disruption in the channel in the U.S. is that the value proposition was quite good … mail undercut the store business," said Brahm Eiley, a principal at Convergence. "In contrast to its American cousin, [Zip is]more expensive, and its library is smaller."
Enter the vending machine
Zip is also attempting to compete in the world of automated rental kiosks.
"One-dollar-a-day rental changed the market in the U.S. We're hoping to change it here too," said Zip.ca CEO Scott Richards.
Zip is finishing a pilot project with Metro grocery stores in Ottawa and Montreal, and in the fall will install the machines in 800 Metro stores across the country in a revenue-sharing deal.
Other players in the kiosk space had about 400 units in Canada at the end of 2009, compared with 27,000 kiosks in the U.S., meaning the market is still open for competition.
The new frontier: online streaming
There is no part of the rental business quite as poised for change in the coming months as online.
"Prior to the Internet, any street in America had a travel agency … and you don't see travel agencies that much any more," Netflix spokesperson Steve Swasey said.
The company is pursuing the digital market: On the day the iPad was released, Netflix released an application for the device. Netflix is also coming to the iPhone, and it has put out job postings for developers for the Android mobile platform. It has deals with Blu-Ray disc player manufacturers to connect its online content to TV sets.
Zip.ca also plans to launch an online rental service in the fall, and Cineplex, which dominates the theatrical movie market in Canada, will expand its online DVD store to allow customers to purchase movies for download, or rent a movie in the same streaming format.
"Netflix is starting with zero base in Canada. We have 70 million people coming through our doors annually. I think we understand the movie business," Cineplex CEO Ellis Jacob said.
Cable giant Rogers Communications Inc. is also looking into this side of the business: In addition to its brick-and-mortar rental stores, it has already launched Rogers On Demand Online to allow its subscribers to watch online whatever is available through their cable subscriptions. It is now looking at expanding the service to rent movies online, to complement its video-on-demand service that allows customers to rent through their television sets.
"We basically go where the customer goes," said John Boynton, executive vice-president of marketing for Rogers.
Even as competition heats up, Blockbuster Canada is confident in the strength of its traditional rental business, said Barry Guest, vice-president and general manager.
"The stores still play a prominent role in rental in the Canadian marketplace," he said. Blockbuster is also looking online, though he wouldn't give any details as to when such a service might be available. "It only makes sense for us to deliver our content any way customers would like."
Blockbuster is right to be confident, for now, Mr. Eiley said. But with a host of new competitors, the market could look very different soon.
"We're going to see change in this country ... but give it a couple of years," Mr. Eiley said. "There's going to be disruption."
How Netflix streaming works
Think of YouTube, but with full movies and TV episodes, and not free. Netflix streams content online - meaning you watch as it plays, as opposed to obtaining a copy that lives on your computer.
In the United States, Netflix operates a movie rental service that sends physical DVDs to its members through the mail; the streaming service comes along with that subscription, at a cost of about $9 (U.S.) a month for the cheapest option, which provides one mailed DVD at a time and unlimited online viewing.
In Canada, Netflix will not be operating a mail service. The company has not yet announced its pricing, but one could assume a monthly subscription fee would apply here as well, with the same unlimited online viewing of its movie library.
March 7, 2013
Where the Wind Blows: Renewables in Europe

You might never guess that Jimmy Carter was the first U.S. president to actively support renewable energy by putting solar panels on the roof of the White House. Today the U.S. gets only 6% of its power from renewables. Elsewhere in the world, it's an entirely different story.
Take Germany, for instance, which in 2000 was already at 6%. That year, it passed the Renewable Energy Act and embarked on its Energiewende (energy movement). At the end of 2012 it generated more than 25% of its energy from renewables—solar, wind, and biomass. It has also succeeded in passing all its benchmarks on the drive to boost that total to 80% by 2050. It’s doing so well, in fact, that it’s upped its 2020 benchmark from 30% to 35% and intends to shut down all nuclear power generation by 2022—a decision made in the wake of the Fukushima nuclear disaster in Japan.
Germany has pursued solar with a zeal found in few other locales, boasting a third of the installed solar capacity in the world.
Utility companies are not the drivers of renewables in Germany. Individuals are, owning 65% of the country’s renewable energy capacity. Utilities, on the other hand, account for only 6.5% of the renewables sector, continuing to focus on fossil fuels.
But renewable energy subsidies are on the chopping block as German officials seek to limit rising energy bills—which help pay for those subsidies—and that looks likely to limit the rate of growth. In addition, a slower economy has already taken a toll on energy consumption.
That reduction in demand has in a way made renewables in Germany victims of their own success. A recent Fitch Ratings report said as much: “Fundamental changes are taking place in German power generation stemming from the implementation of the government’s new energy policy, the Energiewende, utilities’ decisions on capex and asset decommissioning …” However, structural change in the energy market “is also driven by subdued growth in the wider economy and overcapacity that is likely to persist in the coming years on the back of growth in renewables and thermal capacity,” according to the report.
Spain, too, has set a record for energy generation from wind power, according to the blog of the Spanish Wind Energy Association. It now gets about 25% of its energy needs from wind, and from the beginning of November wind was its largest single source of electricity. It also gets about 5% of its electricity from solar.
Here, too, all is not well. The renewables industry exploded in Spain over the past 10 years with the help of the government, which capped power prices and heavily subsidized the sector. Investors from all over the world flocked to take part in its growth, as the favorable environment promised healthy returns.
Years of austerity spurred by the debt crisis are causing the government to rethink its policies—retroactively. In mid-February, the Spanish parliament passed legislation that cut subsidies for renewables; that came on top of a law that taxed power generation but had an outsize impact on renewables. There are, according to Fitch, “at least four negative implications as a consequence of the approval of this royal decree: an uncertain future for renewable projects; legal claims against the government’s decision; a weakening of the investment environment; and increased policy risk.” And Spain is not alone.
Those legal claims are already being pursued: international investors from the U.S., the United Arab Emirates, Germany and Japan, among others, are filing suit against the Spanish government under the internationally ratified Energy Charter Treaty, which binds members to rules on energy and arbitration.
“[Southern renewables] are under pressure,” writes Federico Gronda, an analyst at Fitch. “Fitch Ratings differentiates between renewable projects in Northern Europe, for which the outlook is stable, and projects in Southern Europe, for which the outlook is negative. Southern European projects are exposed to the risk of the imposition of tax increases and additional operating requirements, as recently observed in Spain and Italy.”

The renewables champion, appropriately enough, is in the north. Iceland gets 100% of its electricity from renewables: geothermal contributes 25% of that power (and covers some 87% of the country's needs for heat and hot water), while large hydro supplies the other 75%. Together, the two sources meet about 87% of Iceland's total energy needs, spanning not just electricity but heat and transportation as well.
Generation is accomplished in an efficient way, too, with its low and stable electricity cost attracting aluminum smelters, and the ore as well, to its shores—which, in turn, benefits its economy. In fact, President Ólafur Ragnar Grímsson has credited Iceland’s clean energy with helping the country to emerge from its financial meltdown in 2008.
Grímsson pointed out as far back as a 2011 Poptech conference speech that reliable sources of renewable energy offer protection against spikes in the cost of fossil fuel. Not only that, but the country has drawn substantial business investments, not just from the aforementioned aluminum smelters, but also from companies looking for locations to build data centers. And those investments have drawn others, enabling Iceland to recover far more quickly than it might have otherwise.
Consultant McKinsey & Co. has counseled Iceland to take it a step further and export renewable power to Europe as a means to sustain its recovery and build growth. And Iceland is exploring the possibility of doing exactly that; it approached Britain last year about the possibility of an underwater power cable to Scotland that could provide the island nation with as much power as a typical nuclear reactor.
By Marlene Y. Satter | 科技 |
Researchers Say Oil Dispersants Still an Issue in the Gulf
Wednesday, 20 April 2011 09:43 By Mike Ludwig, Truthout | Report
A boat wades through the oily waters of the Gulf of Mexico, on June 16, 2010. The water has an iridescent rainbow sheen from the dispersant used to break up the crude oil spill. (Photo: kk+)
Scientists are still working to understand the ecological and human health impacts of the environmental disaster that followed BP's Deepwater Horizon blowout in the Gulf of Mexico one year ago. While it may too soon to identify the long-term consequences of the disaster, a growing body of evidence reveals that the massive release of oil combined with the unprecedented amount of chemical oil dispersants applied by BP is still an environmental threat a year later.
Truthout reported on BP's decision to exclusively use the controversial dispersants Corexit 9500 and Corexit 9527 in early June 2010, when conservationists blamed the chemicals for massive fish kills and health agencies reported that the chemicals were making people sick. Research conducted in the past year suggests that Corexit, combined with dispersed oil in broad undersea plumes, could have been the culprit.
Dispersants like Corexit do not eliminate oil, but break it down into tiny, more biodegradable droplets that are less visible on the surface and can sink to the bottom. Nalco, the company that currently manufactures Corexit, claimed the chemicals were safer than dish soap and would decompose in 28 days. Scientific research conducted since the disaster, however, shows components of Corexit and dispersed oil lingered in Gulf waters much longer and could still be in the food chain.
In late May 2010, the Environmental Protection Agency urged BP to use dispersants thought to be safer and more effective, but BP argued that the Corexit line was the best choice and bought up large reserves of the chemical. BP continued to exclusively use Corexit dispersants even after it was revealed that Nalco's board of directors includes Rodney Chase, who spent 38 years with BP and 11 years on BP's executive board. A report released this week by watchdog group Food and Water Watch (FWW) reveals that Nalco has shown tremendous revenue gains as a result of roughly $70 million in dispersant sales to BP.
An unprecedented 1.84 million gallons of Corexit were added to the Gulf of Mexico over several months after the blowout, according to the FWW report. Nearly a million gallons of Corexit were applied near the leaking wellhead below the Deepwater Horizon, a novel and unprecedented application technique that has caused some experts concern over the long-term health of marine life.
Corexit applied in deep water was trapped in layers of the ocean and traveled on ocean currents, and a team of researchers with the University of Georgia found one chemical component of Corexit had not degraded by December 2010. This persistence, the FWW report claims, raises concerns about long-term impacts of the dispersants and shows that wildlife and seafood eaters may have been exposed to the chemicals for a longer period of time than previously thought.
"We're still extremely worried about the underwater plumes of oil and dispersant since they're even more toxic than dispersant sprayed on the top of the water," said FWW Director Wenonah Hauter. "The dispersed oil in plumes is more easily absorbed and consumed by marine animals. We should definitely consider this when researching the dolphin and sea turtle deaths. A year later, the body count keeps rising."
The FWW points out that, on March 11, the National Oceanic and Atmospheric Administration declared an "unusual mortality event" after more than 80 dead dolphins washed up on Gulf state shores between mid-January and early March. As of April 7, 153 dead dolphins were found, and experts believe the actual death count to be as high as 7,650. Many of the dolphins were premature, stillborn or newborn. The carcasses of hundreds of turtles and other endangered species were also found.
"Basic physiology suggests that dispersed oil will negatively impact the reproductive capabilities of a wide variety of animals," said Richard Condrey, an associate professor at Louisiana State University who specializes in coastal ecology and fisheries.
Researchers believe Corexit also made hundreds of people sick during the disaster in the Gulf, and efforts are underway to determine potential long-term impacts on human health. Corexit 9527 was one of the dispersants used to clean up the 1989 Exxon Valdez spill in Alaska. Nearly 7,000 cleanup workers reported feeling ill with breathing problems at the time, and chemicals in Corexit have long been suspected to be the culprits. FWW reports that the average age of death for Exxon Valdez cleanup workers is 50 years.
Despite these warning signs, BP chose to use Corexit exclusively. In early August, 275 oil and cleanup workers and 84 members of the general public reported "spill-related health problems" consistent with symptoms of exposure to Corexit, according to the FWW report. A door-to-door survey taken 11 days after the well was capped found that 48 percent of people living in coastal communities in Louisiana reported having short-term bouts of coughing, headaches, rashes, and other symptoms consistent with chemical exposure.
Although controversial, Corexit did keep large quantities of oil from washing up on America's beaches. Critics, however, say widespread use of the dispersant on the surface and below the Gulf was a big experiment and safer products and methods should have been considered. The decision to apply Corexit with unconventional methods was a hasty one, the FWW report concludes, and only long-term research will reveal its full impact in the Gulf of Mexico.
Mike Ludwig
Mike Ludwig is an investigative reporter at Truthout and a contributor to the Truthout anthology, Who Do You Serve, Who Do You Protect? Follow him on Twitter: @ludwig_mike.
Trade in the Digital Age: Can e-Residency be an enabler for Asia-Pacific Developing Countries? (Trade Insights: Issue No. 17). 20 Apr 2016, Working paper series. The advent of the digital age in international trade has opened new possibilities for countries at all stages of development. Digital trade can support the achievement of the United Nations Sustainable Development Goals (SDGs) and increase economic prosperity worldwide. However, many developing economies, and particularly least developed countries, often lack the digital infrastructure and legal and policy frameworks to enable their citizens to seize these opportunities.

Building e-resilience: Enhancing the role of ICTs for Disaster Risk Management (DRM). 12 Apr 2016, Books. This report highlights emerging technologies such as the use of Big Data for DRM purposes, an approach that is still being explored but has so far demonstrated immense potential. Along with it, however, come significant challenges that have to be overcome in order to truly benefit from real-time use of MNBD. Utilizing new sources of data such as MNBD and even social media for assisting in predicting emerging trends and shocks, as well as for building greater resilience, is still an emergent field.

Working Paper Series (SD/WP/02/April 2016): Asymmetries in International Merchandise Trade Statistics: A case study of selected countries in Asia-Pacific. 5 Apr 2016, Working paper series. This working paper introduces the concept of bilateral asymmetries in international merchandise trade statistics (IMTS), i.e. the discrepancies that can be seen in reported bilateral trade flows between trading partners. Such discrepancies mean that the value of exports reported by one country does not equal the value of imports reported by its partner, also called mirror data. These discrepancies affect bilateral trade balances and other economic variables reliant upon the trade balance.

Transformations for Sustainable Development: Promoting Environmental Sustainability in Asia and the Pacific. 3 Apr 2016, Books. Asia and the Pacific is a dynamic region. Regional megatrends, such as urbanization, economic and trade integration and rising incomes and changing consumption patterns, are transforming its societies and economies while multiplying the environmental challenges.

UNNExT Handbook on Implementing UN/CEFACT e-Business standards in agricultural trade. 31 Mar 2016, Books. This handbook presents a general framework for the implementation of e-Business standards in the agrifood sector. It looks specifically at four e-Business standards developed by UN/CEFACT in the areas of electronic phytosanitary certificates; electronic reporting of sustainable fishery management; electronic exchange of laboratory analysis results; and management and exchange of certificates for trade in CITES-controlled species.

Impacts of Imported Technology in Asia-Pacific Developing Countries: Evidence from Firm-Level Data (Trade Insights: Issue No. 16). 24 Mar 2016, Working paper series. The expansion of technological capabilities among firms in developing countries has often been linked to international integration. Access to larger pools of higher-quality intermediate inputs, as well as the opportunity to employ technology developed in other countries, can stimulate firms to undertake innovative activities and develop new products. This note explores these linkages using a firm-level dataset obtained from the World Bank Enterprise Surveys containing information on 22,466 firms across 19 Asia-Pacific economies and 18 industrial sectors.

The United Nations World Water Development Report 2016 - "Water and Jobs". 23 Mar 2016, Books. Water is an essential component of national and local economies, and is needed to create and maintain jobs across all sectors of the economy. Half of the global workforce is employed in eight water and natural resource-dependent industries: agriculture, forestry, fisheries, energy, resource-intensive manufacturing, recycling, building and transport.

Disasters in Asia and the Pacific: 2015 Year in Review. 10 Mar 2016, Books. This study is part of an annual series developed by the Information and Communications Technology and Disaster Risk Reduction Division of ESCAP. It provides a yearly overview of natural disasters in the Asia-Pacific region and their impacts.

Building e-Resilience in Mongolia, Enhancing the Role of Information and Communications Technology for Disaster Risk Management. 4 Mar 2016, Books. The Information and Communications Technology and Disaster Risk Reduction Division (IDD) of the United Nations Economic and Social Commission for Asia and the Pacific (ESCAP) has conducted a series of research studies on building e-resilience that examine the use of information and communications technology (ICT) for disaster risk reduction (DRR) in selected Asia-Pacific countries.

Building e-Resilience in Sri Lanka, Enhancing the Role of Information and Communications Technology for Disaster Risk Management. 4 Mar 2016, Books. Disasters affect multiple facets of human life, so disaster risk management (DRM) requires multiple mechanisms across different silos in order to prepare for and deal with all types of disasters. These mechanisms require collaboration at the international or regional level, and coordination with government at the national and local levels, with community organizations and with individuals. In all these instances, effective communication is critical.
Revived technique could yield diesel fuel
Nov. 7, 2012 at 7:47 PM
BERKELEY, Calif., Nov. 7 (UPI) -- A long-abandoned fermentation technique once used to make explosives can be used to produce renewable diesel fuel to replace fossil fuels, U.S. researchers say.
Chemists and chemical engineers at the University of California, Berkeley, said they've produced diesel fuel from the products of a bacterial fermentation process discovered nearly 100 years ago.
Those products can be extracted and catalytically altered to make a fuel that burns like diesel, they said.
The fermentation process was discovered in 1914, around the start of World War I, and allowed Britain to produce acetone needed to manufacture cordite, used as a military explosive to replace gunpowder.
The process employs the bacterium Clostridium acetobutylicum to ferment sugars into acetone, butanol and ethanol. A catalyst then converts the ideally proportioned brew into a mix of long-chain hydrocarbons that resembles the combination of hydrocarbons in diesel fuel.
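For context, this is the classic acetone-butanol-ethanol (ABE) fermentation, and the split below is a textbook approximation of its output rather than a measurement reported by the Berkeley team. Clostridium acetobutylicum typically produces the three solvents in a mass ratio of roughly

\[
\text{acetone} : \text{butanol} : \text{ethanol} \approx 3 : 6 : 1,
\]

with carbon dioxide and hydrogen given off as gaseous by-products, so butanol, the most diesel-like of the three, dominates the brew that the catalyst subsequently upgrades.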
The resulting product was found to burn as well as normal petroleum-based diesel fuel.
While the fuel's cost is still higher than that of diesel or gasoline made from fossil fuels, the process would drastically reduce greenhouse gas emissions from transportation that contribute to global climate change, the researchers said.
"Diesel could put Clostridium back in business, helping us to reduce global warming," biomolecular engineering Professor Douglas Clark said. "That is one of the main drivers behind this research."
Drone Company Demos How Blood Air-drops Will Work in Rwanda
April 04, 2016 10:46 AM
FILE - A prototype drone is seen carrying a parcel for delivery.
LOS ANGELES — Drone delivery might be years away in the U.S., but it's becoming a reality in Rwanda this summer.
A San Francisco-based drone delivery company says it'll start making its first deliveries of blood and medicine in Rwanda in July.
Zipline International Inc., backed by tech heavyweights like Sequoia Capital and Google Ventures, demonstrated its technology for journalists last week in an open field in the San Francisco Bay area.
In a demo broadcast on Periscope on Friday, a staffer launched a fixed-wing plane weighing just 22 pounds off a launcher that used compressed air.
Electric-powered propellers took it the rest of the way, on a flight that could extend to 75 miles round trip, using military-grade GPS and software to navigate.
As it dipped low before the drop-off area, the bottom popped open, and a cardboard box with a parachute made of butcher paper and biodegradable tape burst out, plopping to the ground a few steps away from CEO Keller Rinaudo, who walked over to retrieve it.
“You have a database of people. You know their lives are in danger,” he said. “Can you get them what they need fast enough? That's been the mission from the start.”
Company executives said each flight costs about the same as a motorcycle trip, but is far more reliable.
And because deliveries of packages of up to 3.5 pounds can be completed in 15 to 30 minutes, modest packaging eliminates the need for refrigeration along the way, which cuts down on wasted supplies such as blood.
“We leapfrog broken refrigerators, we leapfrog the lack of roads,” said Keenan Wyrobek, Zipline's head of product and engineering.
Two hubs contained in modified shipping containers with 10 to 15 planes each are all that's required to serve all of Rwanda, the company says. The Rwandan government announced its deal with Zipline in February.
It plans to operate in other countries later this year if it proves it can operate successfully in Rwanda.
Rinaudo says the company for now is focused on medical supply delivery in emerging economies where there is less air traffic and regulations are easier to deal with than in the U.S. or Europe.
“The U.S. has one of the most complicated airspaces in the world and for that reason the [Federal Aviation Administration] is even more risk-averse than most regulators,” he said. “So I think where this will start is in environments where the need is incredibly high and the airspace is relatively empty.” | 科技 |
First site licence for UK new build
The UK's Office for Nuclear Regulation (ONR) has granted a site licence for EDF Energy's planned Hinkley Point C power station. It is the first new site licence to be awarded for a UK nuclear power station in 25 years, although further consent will be needed before construction can begin.
How Hinkley Point C could look (Image: EDF Energy)
ONR's decision to award the licence to EDF Energy new build subsidiary NNB Generation Company (NNB GenCo) is the culmination of over three years of assessment by the regulator. Although the licence does not confer permission to begin construction of nuclear-safety related plant at the site in Somerset in south-western England, it is nevertheless a significant step towards new build in the UK.
The site licence enhances regulatory control of activities associated with designing and building nuclear facilities at the site, and requires NNB GenCo to comply with various conditions. UK chief nuclear inspector Mike Weightman noted, "these conditions provide ONR with the necessary regulatory powers to ensure the protection of people and society from the hazards associated with such nuclear power generation."
Further consent from ONR, permits from UK environmental regulator the Environment Agency and formal planning consent will be needed before construction of the two EPRs planned for Hinkley Point C can begin in earnest. The EPR design is currently undergoing a generic design assessment by the two UK regulators, a process that the ONR says could be completed by the end of 2012 if all outstanding issues are resolved.
EDF Energy expects to make its final investment decision on Hinkley Point C by the end of the year, but the company's new build managing director Humphrey Cadoux-Hudson said the award of the site licence was a crucial step towards the construction of the reactors, serving as a "vote of confidence" in the company. "We remain focused on putting the components in place that will enable a final investment decision to be made at the earliest possible date," he said. Recently announced UK energy market reforms should influence that decision.