SuSE Delivers First Linux for 64-bit AMD Opteron
The 64-bit x86 Linux Race Heats Up
By Steven J. Vaughan-Nichols

When most Linux users think of 64-bit computers they think of Intel's Itanium II--and why not? HP is making Itanium the centerpiece of its Linux plans and Red Hat and the UnitedLinux companies have all announced Linuxes for Intel's troubled server chip. Don't look now, but, thanks to SuSE, Linux users will soon have two current processors to consider for their 64-bit x86 computing needs: Itanium and AMD's Opteron.
Burdened with the unwieldy title of SuSE Linux Enterprise Server 8 for AMD64, Powered by UnitedLinux, SuSE is delivering the first Linux for the Opteron. They won't be the last, though. According to Jay Migliaccio, SuSE product manager, the UnitedLinux consortium will be announcing support for the chip this week, meaning at least three other Linux distributors--Conectiva, SCO, and Turbolinux--will soon be able to deliver their own Linuxes for Opteron. And, the operative word is "soon." Migliaccio says that SuSE's operating system, based on the Linux 2.4.19 kernel, will be shipping within a week of the announcement. Indeed, SuSE has already lined up eight distribution partners for SuSE 8 for AMD64, including Appro, a server clusters vendor; M&A Technology, a manufacturer of application specific servers; the longtime and well known Linux server provider, Penguin Computing; PSSC Labs, a provider of Beowulf supercomputers; RackSaver, an OEM of high-density, rack-optimized servers, blade servers, and supercomputing clusters; and Tempest Computers, a firm specializing in the design, assembly and integration of high-end, high-availability servers and high performance server clusters.
You might wonder how SuSE can forge forward so quickly with an enterprise-class processor for servers and workstations that AMD itself won't be officially unveiling until April 22nd. The answer is that AMD has been working hand in glove with SuSE since 2000 on bringing Linux to the Opteron. "AMD has worked closely with SuSE Linux to develop an enterprise-class operating system and development tools for AMD64," said Marty Seyer, vice president and general manager of AMD's Microprocessor Business Unit.
Of course, the real question isn't how they did it, it's how well it will work. SuSE is sure that its distribution for Opteron will not just continue to expand SuSE Linux users' hardware choices--SuSE Enterprise Server will now be available on IBM's zSeries, pSeries, iSeries, Intel 32-bit designs, and Itanium--but that application designers and network administrators will see immediate performance gains.
As Migliaccio says, even without optimizing an existing 32-bit program for the new chip's architecture, "developers and users can expect to see a significant improvement boost simply by recompiling their applications with the operating system's 64-bit libraries." Even if gcc and make are mysteries to you, though, Migliaccio assures users that their older 32-bit applications that run on SuSE Linux will run unchanged on the new platform.
That's not just SuSE talking. "SuSE has worked closely with us from the beginning on both our 32-bit and 64-bit efforts on the Opteron processor," said Dave Dargo, vice president, Platform Alliances, Oracle. "SuSE's leadership in being the first 64-bit Linux OS provider on the Opteron processor, combined with the availability of the industry leading Oracle9i Database on Opteron shows the desire of Oracle and SuSE to lower our customers' cost of computing while providing high performance, scalability and enterprise functionality."
According to Joe Eckert, SuSE VP of Corporate Communications, "SuSE Linux Enterprise Server 8 provides improved scalability for up to 64 processors and up to 512 GB of main memory. These characteristics make SuSE Linux Enterprise Server 8 an ideal computing cluster platform for 32- and 64-bit high-performance computing solutions and environments with advanced speed and scalability requirements. It will also come with the standard Linux server packages such as Apache, perl, MySQL, Samba and Sendmail" and "all necessary components for building C and C++ applications both for 32-bit x86 and 64-bit AMD64 code."
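Migliaccio's recompile point is easy to picture with a small example. The sketch below is a hypothetical illustration, not taken from SuSE's documentation: the GCC flags in the comments are ordinary compiler options for building the same C++ source once as legacy 32-bit x86 code and once as native AMD64 code, which is the kind of straight recompile against the 64-bit libraries described above.

// Minimal sketch (assumed toolchain: GCC on Linux/AMD64); the commands are
// illustrative, not SuSE's documented build procedure:
//
//   g++ -m32 -O2 abi_check.cpp -o abi_check32   # legacy 32-bit x86 build
//   g++ -m64 -O2 abi_check.cpp -o abi_check64   # native AMD64 build
//
// The same source works in both modes, mirroring the claim that existing
// 32-bit applications run unchanged while a simple recompile picks up the
// wider 64-bit architecture.
#include <cstdio>

int main() {
    std::printf("pointer: %zu bytes, long: %zu bytes\n",
                sizeof(void *), sizeof(long));
    // Prints 4 and 4 under -m32, and 8 and 8 under -m64 on Linux.
    return 0;
}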
The new SuSE Enterprise Server will retail for $448. That includes four CDs, documentation, and the SuSE Linux Maintenance Program for one CPU for 12 months. So, while Opteron may be new to you and the Linux world, thanks to SuSE, the Opteron is a high-end chip that will be in Linux business-ready systems from the get-go.
This is Tim: Apple's CEO on iPad sales, China, Beats, IBM, and more
A (very lightly) edited transcript of Tim Cook's comments from Apple's Q3 earnings call:
Serenity Caldwell | 23 Jul 14
Highlights of the June quarter
It's been a very busy and exciting time at Apple, and I'd like to review some of the highlights of our June quarter. We hosted our best-ever Worldwide Developers Conference last month, with over 20 million people from around the world watching our keynote session, which is a new record. We've had overwhelming response from customers and developers to the new features we previewed in OS X Yosemite and iOS 8. Yosemite has been redesigned with a fresh look and powerful new apps, and iOS 8 is the biggest release since the launch of the App Store. With powerful Continuity features, these upcoming releases will allow Macs and iOS devices to work together in even smarter ways. Customers can start an activity like writing an email on one device and pass it to another, picking up where they left off without missing a beat. They'll even be able to make and receive iPhone calls on their Mac with just a click. These are features that only Apple can deliver.
With iOS 8, we've opened over 4000 APIs, providing more flexibility and opportunity for developers than ever before. iOS 8 provides developers with amazing new frameworks, enables wider use of Touch ID to securely authenticate users within apps, and lets developers further customize the user experience with major extensibility features such as third-party keyboards.
We've also introduced Swift, an innovative new programming language for both iOS and OS X. Swift is the result of the latest research on programming languages combined with decades of experience within Apple building platforms. It makes writing code interactive and fun, eliminates entire classes of unsafe code, and generates apps that run lightning fast. It's easy to learn, allowing even more people to dream big and create whole new categories of apps.
We believe our new OS releases, combined with Swift, will result in a huge leap forward for the Apple ecosystem, and we can't wait to see what developers will create with Yosemite, iOS 8, and Swift.
When we introduced iOS years ago, it was a revolutionary operating system for iPhone. Over the years, we've extended it to the iPod family with iPod touch, and later to a tablet form factor with iPad. An explosion of apps, accessories, and services for these devices has created an incredibly vibrant ecosystem.
We're extending iOS in even more dimensions as customers around the world make iPhones and iPads an essential part of their lives at home, at school, at work, and on the go. We're putting a huge effort into delivering the best experience for our customers wherever they use iOS. That includes a safe and intuitive user interface while driving, called CarPlay, which is being integrated by 29 major car brands including Audi, BMW, Ford, General Motors, Honda, Hyundai, Mercedes, Toyota, and Volvo, and aftermarket systems like Pioneer and Alpine.
We've created a new tool for developers, called HealthKit, which lets health and fitness apps work together and empowers customers to choose what health data they share. We're taking the first steps in this area in collaboration with the Mayo Clinic, whose new app can automatically receive data from a blood pressure app, for example, and share it with a physician. Or a nutrition app can inform fitness apps how many calories are being consumed each day. Our own Health app will provide an easy-to-read dashboard of all health and fitness data.
We're enabling new ways to control lights, and doors, and thermostats, and other connected devices around the house using Siri, with the HomeKit feature of iOS 8.
And in the Enterprise, we're including new security, productivity, and device management features in iOS 8. We've forged a relationship with IBM to deliver a new class of mobile business solutions to Enterprise customers around the world. We're working together to provide companies access to the power of big-data analytics right on every employee's iPhone or iPad.
Using Swift, we'll collaborate to bring over 100 mobile-first apps to Enterprise clients, each addressing a specific industry need or opportunity. This is a radical step for Enterprise, and opens up a large market opportunity for Apple. But more importantly, it's great for productivity and creativity of our Enterprise customers.
From the pocket, to the car, to the workplace, home, and gym, we have a very large vision of what iOS can be, and we're incredibly excited about our plans.
Turning to our financial results, today we're reporting record June quarter revenue thanks to the very strong performance of iPhone, Mac, and the continued growth of revenue from the Apple ecosystem. Our teams executed brilliantly during the quarter with earnings per share up 20 percent year-over-year, our highest growth rate in seven quarters. We sold over 35 million iPhones, setting a new third-quarter record. We generated healthy growth in our entry-priced, mid-tier, and lead iPhone categories. I'm especially happy about our progress in the BRIC countries, where iPhone sales were up a very strong 55 percent year over year.
We also had a record June quarter for Mac sales, with growth of 18 percent year over year in a market that is shrinking by 2 percent according to IDC's latest estimate. Demand has been very strong for our portables in particular, and we've had a great customer response to the new higher-performance, lower-priced MacBook Air.
It was another strong performance for the App Store, and the other services contributing to the thriving Apple ecosystem. In fact, for the first nine months of this fiscal year, the line item that we call iTunes software and services has been the fastest-growing part of our business. iTunes billings grew 25 percent year-over-year in the June quarter and reached an all-time quarterly high, thanks to the very strong results from the App Store. We're continuing to invest in our incredible ecosystem, which is a huge asset for Apple, and a very important differentiator of our customer experience.
iPad sales met our expectations, but we realize they didn't meet many of yours. Our sales were gated in part by a reduction in channel inventory, and in part by market softness in certain parts of the world. For example, IDC's latest estimate indicates a 5 percent overall decline in the U.S. tablet market as well as a decline in the western European tablet market in the June quarter.
But what's most important to us is that customers are enjoying their iPads and using them heavily. In a survey conducted in May by Changewave, iPad Air registered a 98 percent customer satisfaction rate, while iPad mini with Retina display received an astonishing 100 percent customer satisfaction rate. The survey also found that among people planning to purchase a tablet within 90 days, 63 percent planned to buy an iPad, and our own data indicates that more than half of customers purchasing an iPad are buying their very first iPad.
Another recent study, by Custora found that iPad accounts for 80 percent of all U.S. tablet-based e-commerce purchases. We're very bullish about the future of the tablet market, and we're confident that we can continue to bring significant innovation to this category through hardware, software, and services.
We think our partnership with IBM, providing a new generation of mobile Enterprise applications, designed with iPad's legendary ease of use and backed by IBM's cloud services and data analytics, will be one such catalyst for further iPad growth.
Other partnerships and opportunities
Looking ahead, we are very excited about our agreement to purchase Beats Electronics and Beats Music. Music is part of Apple's DNA, and we think the addition of the Beats team will be great for music lovers. Beats provides Apple with a fantastic subscription music service, access to rare talent, and a fast-growing lineup of products that we can build upon.
Not counting Beats, we've completed 29 acquisitions since the beginning of fiscal year 2013, including five since the end of the March quarter, and we've brought some incredible technology, and more importantly some incredible talent, into Apple in the process.
We're hard at work and investing heavily on exciting opportunities across our business, and we have an incredible pipeline of new products and services that we can't wait to show you.
On the iPad decline
If you sort of back up from this, the category that we created, which has been in a little over four years, we've now sold 225 million iPads. Which is, I think, probably a larger number than anyone would have predicted at the time, including ourselves, quite frankly. We still feel the category as a whole is in its early days, and that there's also significant innovation that can be brought to the iPad, and we plan on doing that.
When I look at the top-level numbers, I get really excited when I see that more than 50 percent of the iPads we're selling are going to someone who's a first-time tablet buyer. I get excited when I see that the retail share according to NPD for the month of June was 59 percent in units, and over 70 percent in terms of dollars.
And of course, [CFO] Luca [Maestri] had mentioned in his preamble that our education share is 85 percent. We also are in virtually all Fortune 500 companies--99 percent of them to be exact--and 93 percent of the Global 500. However, when we dig into the business market deeper, though our market share in the U.S. in the commercial sector is good--it's 76 percent, according to IDC--the penetration in business is low, it's only 20. And to put that in some kind of context, if you looked at penetration of notebooks in business, it would be over 60.
So we think that there's a substantial upside in business, and this was one of the things behind the partnership with IBM that we announced last week. We think that the core thing that unleashes this is a better go-to-market, which IBM clearly brings to the table, but even more importantly, apps that are written with mobile-first in mind. Many of the--not all, but many of the Enterprise apps that have been written for iPad have been essentially ports from a desktop arrangement and haven't taken full advantage of mobile. And so we're excited about bringing that to business along with partnering with IBM which we think is a first-class company, and seeing what that can do to sales of business, which I honestly believe the opportunity is here.
The market is still predicted in 2018--I think these are Gartner numbers--to be about 350 million in size, and to put that in some context, I think the PC market right now is about 315. And so I think our theory that has been there, honestly, since the first time we shipped iPad--that the tablet market would eventually surpass the PC market--that theory is still intact. I just think we have to do some more things to get the business side of it moving in a faster trajectory, and I think we're now on to something that can really do that.
So as I look at it, and back up from the 90-day clock kind of thing, I'm incredibly excited, excited about the plans we have on the product side, and also on the go-to-market side, in particular the IBM announcement.
One other point I might add on this, because I think this is interesting and I don't think it came out in our commentary so far is the market's very bifurcated on iPad. In the BRIC countries, iPad did extremely well. The growth was very high. Like in China, it was in the 50s, the Middle East it was in the 60s; in the developed countries, like the U.S., the market is clearly weaker there. It's interesting to note, however, that those--the U.S. as an example--we had a very, very strong Macintosh market in the U.S.
And so there's probably a bit of higher-ed kind of stuff beginning to play out too, where higher-ed is clearly still very much notebook-oriented; K-12, on the other hand, we sell 2.5 iPads for every Mac into K-12. And so we're headed, we're clearly headed into that season now, and it typically starts in graduation time-frame in fiscal Q3 for us, and that's probably another thing that we're seeing.
China, honestly, was surprising to us. We thought it would be strong, but it went well past what we thought. We came in at 26 percent revenue growth, including at retail. And if you look at the units, the unit growth was really off the charts across the board. iPhone was up 48 percent; that compares to a market estimate of 24, so growing at 2x the market. iPad was up as well, as I've mentioned before. The Mac was up 39 percent, and that's versus a market in China that's also contracting along with markets in those parts of the world; in China it was projected to contract by five percent.
And so we're seeing some substantial strength there, and the thing that's actually growing the most is the iTunes software and services category, which has the App Store in it, and that area is almost doubling year over year. And so it's very very exciting what we're seeing there.
We are still in the process of rolling out, along with our partner China Mobile, the TD-LTE, into more cities, and so we're still in the early going on that, that just started in January, as you know. And my understanding is that later this year, there will be a license for the other operators to begin shipping FDD-LTE, which I think is another big opportunity in China.
On trade-in programs for the iPhone
What I think is happening in the aggregate as you look across the world is that trade-ins are actually hugely beneficial for our ecosystem because people wind up...we have more people that are able to join the party when we have a trade-in, because in essence it winds up being used by...the prime example is someone else within the family, or in the example that has become more common in the last year, someone trades it in, and that goes to either somebody else in that country that is very price-sensitive, or somebody in a different country.
And I see all of this as good. In looking at how much of it cannibalizes, it is very hard to answer that question with any degree of preciseness, but my gut is that the cannibalization factor is low. Because you wind up attracting people who are much more price-sensitive than there. The great thing is, our products command a much higher resale value than others do, and so that leads to a larger trade-in, and from my perspective, that means a larger ecosystem, more people that wind up getting on iPhone, and as you know from following us for quite some time, if we get somebody to try an Apple product, and then buy an Apple product, the likelihood that they begin buying other Apple products that may be in different categories, or upgrading to one in that category in the future is very high. And so, net, I view it to be positive; it's very difficult to quantify with certainty.
On the iPhone 5c
I can tell you this: that if you look at the growth rates--we don't divide out each one, but if you look at the year over year growth rate--and so this would be comparing the 5c to last year, it would be comparing the 4s, which was in the mid-tier--the growth in that sector was the highest growth during the quarter we just finished, of the three tiers. And so we're extremely happy with how it performed last quarter.
On component costs and gross margin
What we saw in the June quarter was that NAND, mobile DRAM, and LCD, the pricing on all of those declined, while PC DRAM increased despite the market for PCs contracting. In the September quarter, what is factored in our guidance is that LCDs and mobile DRAM continue to decline, that NAND pricing remains essentially flat from last quarter, and that PC DRAM has a slight price increase.
And in terms of other commodities that I didn't talk about, we've assumed that they would decline at historical rates. And so that's factored in the gross margin guidance.
On upgrading
In terms of the installment plans that you mentioned in the U.S. relative to iPhone, there's a lot of different models that are being tried in the U.S. and throughout the world. And actually, last quarter, as we estimated, and this is subject to estimating error, but we estimated less than one out of four iPhones were sold on a traditional subsidy plan. That number is markedly different than it would have been two years ago. The installment plans that you're speaking about, which gives a customer the right to upgrade fast or faster than a usual two-year cycle, we think that plays to our customer base in a large way.
And so that makes us incredibly bullish that customers on those plans would be very likely to upgrade when we announce a new product.
On acquisitions and partnerships
We have a lot of really great people and I think we have the capability to acquire a sizeable company and manage it, and relative to IBM, I feel the same way. I think you can only do so many partnerships well, and it's unusual that we enter into a partnership. But in this particular case, I think arguably the companies are so complementary. And I've gotten to know Ginni [Rometty, CEO of IBM] fairly well over the last couple of years, and I think we see the importance of the customer a lot of the same way and both feel that mobile and enterprise is just an enormous opportunity.
We're not competing with each other, and so I think a partnership in that case is particularly great.
Would we do more of either of the things we did? We're always looking in the acquisitions space, but we don't let our money burn a hole in our pocket and we don't do things that aren't strategic. With Beats, we felt we were getting an incredible subscription service, a very rare set of talent that we think can do great things at Apple, and access to a very fast-growing business in their headphone and earphone space. And so culturally, we felt there was a match and music has been deeply embedded in Apple's DNA for many many years.
And so it was a great marriage, and I think the partnership with IBM is a great marriage as well. If more like that presented themselves, then I think that we can manage more things. I think we have a very, very strong executive team, and can do that. But it's not my goal to acquire a certain number of companies or spend a certain amount of money. We want to do things that help us make great products and are great for our customers, and so forth.
On IBM and analytics
We didn't talk about how the business model is going to work, but generally speaking I think that each of us have revenue streams in the Enterprise, and each of us win from having those revenue streams. And so that's our look at that. We win if we can drive that penetration number I spoke about from 20 to 60, that would be incredibly exciting here. The walls would shake! And so that's what I hope for.
Direct app sales to enterprise
We have no plans to change the rules with Enterprise. Some enterprises write proprietary apps that they do not want to offer to others, and so we obviously have a way for them to distribute those into their enterprise on just the employees that they want to, and so I'm not worried about changing that.
We're all for taking friction out of the system and not adding it. Again, the big thing for us is getting the penetration number up and getting our products, iPhones and iPads and Macs, in more people's hands, and we think there's a huge opportunity in Enterprise to do that.
Kirtland's Warbler (Dendroica kirtlandii)
The endangered Kirtland's warbler is one of the rarest members of the wood warbler (Parulidae) family. It is a bird of unusual interest for many reasons. It nests in just a few counties in Michigan's northern Lower and Upper peninsulas, in Wisconsin and the province of Ontario and, currently, nowhere else on Earth. Its nests generally are concealed in mixed vegetation of grasses and shrubs below the living branches of five to 20 year old jack pine (Pinus banksiana) forests. The male Kirtland's warbler's summer plumage is composed of a distinctive bright yellow colored breast streaked in black and bluish gray back feathers, a dark mask over its face with white eye rings, and a bobbing tail. The female's plumage coloration is less bright; her facial area is devoid of a mask. Overall length of the bird is less than six inches.
Because of its restricted home range and unique habitat requirements, the Kirtland's warbler probably has always been a rare bird. Scientists did not describe the species until 1851 when a male was collected on the outskirts of Cleveland, Ohio. That first specimen was sent to the Smithsonian Institution in Washington, D.C. The species eventually was named in honor of Dr. Jared P. Kirtland, a physician, teacher, horticulturist, and naturalist who authored the first lists of birds, mammals, fishes, reptiles, and amphibians of Ohio. The winter range of the Kirtland's warbler was discovered in 1879 when a specimen was collected on Andros Island in the Bahama Islands archipelago. All sightings or collections of wintering Kirtland's warblers since then have been in the Bahamas and in the Turks, Caicos, and Hispaniola islands. Because of its subtle fall and winter dull brown plumage and behavior, population information on the warbler's winter grounds is scarce. Additional research, education, and public outreach are required during the warbler's eight month stay in the Bahamas. Kirtland's warblers are one of more than 200 neo-tropical migratory species that nest in North America and winter in the tropics.
It was not until 1903 that Norman A. Wood discovered the first nest in Oscoda County in northern lower Michigan. Until 1996, all nests were found within 60 miles of this site. Since then, a small number of nests have been found each year in Michigan's Upper Peninsula. Nesting also has occurred in Wisconsin and the province of Ontario.
The diet of the warbler includes many different insect species at various developmental stages, including caterpillars, butterflies, moths, flies, grasshoppers, as well as ripe blueberries, when in season.

Breeding
Male Kirtland's warblers arrive back in Michigan from the Bahamas between May 3 and May 20, a few days ahead of the females. The males establish and defend territories and then court the females when they arrive. The males' song is loud, yet low pitched, ending with an upward inflection. As the female builds a nest of leaves and grass, lined with mosses or deer hair, the male begins to bring her food. This duty continues through laying and the incubation process, with which the males rarely help. Four to five cream white eggs speckled and blotched with brown are laid in late May, followed by an incubation of 13-16 days. Both parents feed the chicks, which grow quickly and have left the nest within nine days, staying in the undergrowth and lowest branches of the trees. Within five weeks, the parents have ceased feeding their young.
The jack pine forest community provides the primary nesting habitat for the Kirtland's warbler. This forest species is adapted to dry land conditions and has been present on the sandy outwash plains of northern Michigan since the retreat of the Wisconsin ice sheet about 14,000 years ago. A narrow band of jack pine habitat can be found across the north central states and the province of Ontario.
The Kirtland's warbler has very restrictive habitat requirements. In addition to being ground nesters, Kirtland's warblers prefer jack pine stands over 80 acres in size. Those stands, which are most suitable for breeding, are characterized by having dense clumps of trees interspersed with numerous small, grassy openings, sedges, ferns, and low shrubs. The birds nest on the ground under the living branches of the small trees. Jack pine stands are used for nesting when trees are about five feet high or about five to eight years of age. Nesting continues in these stands until the lower branches of the trees start dying, or when the trees reach a height of 16 to 20 feet (about 16 to 20 years of age). A breeding pair of warblers usually requires about six to ten acres for their nesting territory, although as little as 1.5 acres may be adequate under optimal conditions.
Nearly all nesting occurs in jack pine stands where the soil type is Grayling sand. This is an extremely well drained sandy soil with low humus and nutrient content. Water percolates through the sand so quickly that nests seldom are flooded during a rainstorm. This soil also supports the plant community required for nesting habitat.
Fire always has been an important disturbance factor in the jack pine barrens. The young jack pines upon which the Kirtland's warbler depends grow after fire removes older trees and rejuvenates the forest. Heat from fire opens jack pine cones to release seeds. Fire also prepares the ground for the germination of the seeds.
Historically, the jack pine barrens were maintained by naturally occurring wildfires that swept through the region. The jack pine held little value for the lumbermen who came in search of white pine. Once logging activity ended in the 1880's, the continuing forest fires helped increase the range of jack pine, which created more nesting habitat. As a result, the Kirtland's warbler population reached its peak between 1885 and 1900.
With the advent of modern fire protection and suppression efforts, forest management practices did not emphasize the regeneration of jack pine. Consequently, there was a drastic decline of available warbler nesting habitat, and its numbers plummeted. In order to provide appropriate habitat for the Kirtland's warbler, the U.S. Department of Agriculture Forest Service and the Michigan Department of Natural Resources created four areas within state and national forests to be managed specifically for Kirtland's warbler nesting habitat between 1957 and 1962. By 1973, these areas contained 53% of the nesting population.
It was clear that providing more jack pine areas would be necessary to increase the Kirtland's warbler population. During the mid 1970s, some 134,000 acres of jack pine were designated for management as Kirtland's warbler nesting habitat within 24 management areas of state and national forests. Additional lands were added through the 1990's to bring the total public land specifically set aside for the Kirtland's warbler to more than 150,000 acres.
Jack pine stands are managed by logging, burning, seeding, and replanting on a rotational basis to provide approximately 38,000 acres of productive nesting habitat at all times. By carrying these stands to a 50 year rotational age, nesting habitat can be maintained for the warblers with little sacrifice to the commercial harvest of jack pine. These jack pine stands also provide habitat for the upland sandpiper, Eastern bluebird, white tailed deer, black bear and snowshoe hare, and for several protected prairie plants, including the Allegheny plum, Hill's thistle, and rough fescue. Unfortunately, the jack pine habitat also provides a home for the brown headed cowbird, an undesirable nest parasite.
The brown headed cowbird (Molothrus ater), once called the "buffalo bird," was common in the open plains. Cowbirds followed the vast herds of American bison and then cattle, eating the insects that swarmed around the hoofs of the grazing herds. Unable to move with the wandering herds while maintaining a nest, these birds developed an unusual behavior; they began to lay their eggs in the nests of other birds. The cowbird chicks, which hatch earlier than most songbirds, are more aggressive and will out-compete their nest mates for food. This added competition reduces the number of non cowbird young that fledge.
As land in Michigan was opened up during logging and agricultural development, cowbirds moved into the new areas, and the Kirtland's warbler was an extremely vulnerable host. The egg laying activity of the cowbirds began to impact the Kirtland's warbler population.
Studies have revealed that when one cowbird egg is laid in a warbler nest, only one to three warbler chicks may survive. If two cowbird eggs are laid and hatched in a warbler's nest, none of the warbler chicks survive. Heavy cowbird parasitism is believed to have been a major factor in the decline of the Kirtland's warbler population. In 1972, the U.S. Fish and Wildlife Service, in cooperation with the USDA Forest Service, Michigan Department of Natural Resources, and the Michigan Audubon Society, began controlling cowbirds with large live traps that are placed in Kirtland's nesting areas during spring and early summer. The traps, which are baited with millet, water, and several live cowbirds, are checked daily and any trapped cowbirds are euthanized. Non target species are released unharmed. Since 1972, an average of 4,000 cowbirds per year have been removed from Kirtland's warbler breeding areas.
Kirtland's warbler reproductive success has improved dramatically since cowbird trapping began. The nest parasitism rate has declined from the 1966 71 average of 69% to less than 5%. Average clutch size has increased from 2.3 eggs per nest to more than four. The average number of young warblers fledged per nest increased from less than one to almost three birds during the same period. The 2002 annual census counted over 1000 singing males for the second year in a row.
Nesting population size is estimated annually by counting the singing male Kirtland's warblers. The songs of the males are distinct, loud and melodious, and can be heard at a distance of one quarter mile. These counts of the singing males are doubled to determine an estimate of the nesting population. Biologists and other agency personnel, researchers, and volunteers conduct this survey. The first survey was conducted in 1951 and has been done annually since 1971. Additional jack pine areas in the Great Lakes region are being surveyed. Unmated males have been found in Wisconsin, Ontario, and Quebec.
In the late 1990s, a partnership that includes agencies in the U.S. and the Bahamas was formed to identify and protect habitats within the Bahamas that are used by wintering songbirds, including the Kirtland's warbler. The partners include The Nature Conservancy, Canon USA, the Bahamas Ministry of Agriculture and Fisheries, and the Bahamian National Trust. Protection of the wintering grounds includes the development of pine islands and controlling the impact on the use of the forested and broad leaved scrub areas by wild cats.
In 1997, State and Federal government agencies in Michigan, working in partnership, hosted a delegation from the Bahamas. The Bahamian delegates came to Michigan to review endangered species management and to learn more about the Kirtland's warbler, its summer nesting locations, and interagency management. This visit began an international effort to protect this bird in its summer, fall, and wintering habitats.
The following year, a research team of state and federal agencies joined international representatives in the Bahamas to discuss future Kirtland's warbler recovery projects. These projects include: (1) training birding groups in the identification and monitoring of Kirtland's warblers and other rare resident birds, (2) surveying the Bahamian chain of islands to identify critical wintering bird habitats and, (3) forming partnerships to support conservation work in the Bahamas and Michigan.
A Kirtland's Warbler Recovery Plan was developed in 1976, and updated in 1985, to provide state and federal agency personnel with a structured guide to direct management efforts toward increasing the Kirtland's warbler population. The primary recovery objective is to establish and sustain a Kirtland's warbler population throughout its known range at a minimum level of 1,000 pairs using adaptive management techniques. A major component of the original plan is habitat management. In 1981, the Kirtland's Warbler Management Plan for Habitat in Michigan was initiated. It identified strategies that are necessary to maintain and develop nesting habitat for the Kirtland's warbler, and established the following objectives:
DEVELOP and maintain some 36,000-40,000 acres of suitable nesting habitat for the Kirtland's warbler on a sustained basis. This will be done through planned rotation cuttings on 140,000 acres of jack pine stands within designated management areas.
PROTECT the Kirtland's warbler on its wintering grounds and along its migration route.
REDUCE key factors adversely affecting reproduction and survival of the Kirtland's warbler.
MONITOR breeding population of the Kirtland's warbler to evaluate responses to management practices and environmental changes.
DEVELOP and implement emergency measures to prevent extinction.
The Kirtland's Warbler Recovery Team recognizes the need to review several factors necessary in managing this endangered species. These include the role of information and educational outreach in resource management, the development and increased tourism interest in the jack pine forests, the Au Sable River corridor, and the wintering grounds of the Kirtland's warbler. The team supports several educational events, projects and wildlife viewing opportunities. These include a video, guided Kirtland's warbler tours, annual Kirtland's Warbler Festivals, and the self guided auto tour through the jack pine ecosystem. Educational outreach is expected to continue in Michigan, and several new programs are being developed in the Caribbean. A source of continued funding will be necessary to maintain the current level of resource management and research, and is vital to expanding educational outreach efforts here as well as in the Bahamas.
We invite you to become a partner in helping the Kirtland's warbler by supporting the many efforts of the Kirtland's Warbler Recovery Team. You can help by:
Staying out of posted nesting areas.
Camping only in designated campgrounds.
Staying with the tour guides and following their instructions.
Operating all vehicles only on open roads and designated trails within the area.
Leaving your pets in a safe area. Pets are not allowed to run in posted nesting areas.
Not using recordings or imitations of Kirtland's warbler songs to attract birds.
Learning more about endangered species and ways you can help them and their habitats.
Sharing this information with your family and friends.
Being extremely careful with fire.
Donating to the Nongame Fish and Wildlife Fund.
The Kirtland's warbler was first described in Ohio in 1851
It is commonly referred to as the jack pine warbler
This songbird is one of 56 species of wood warblers found in North America
Its nesting habitat is jack pine stands from 5-20 years old
It nests on the ground under living jack pine branches
Management of jack pine forests includes clear-cutting, fire, replanting, and reseeding
Adult Kirtland's warblers are lightweight birds, weighing 1/2 oz
Its average life expectancy is two years
The diet of the warbler includes many different insect species, as well as ripe blueberries.
Breeding males have plumage of blue gray with black streaks
Migrating at night, this wood warbler can come in contact with towers and other high structures
It spends the fall and winter seasons in the Bahamas
Cowbird management is necessary for the Kirtland's warbler's survival
Brown headed cowbirds are parasites of Kirtland's warbler nests
Kirtland's Warbler Singing Male Census Results 1951-2002
Dendroica kirtlandii (University of Michigan, Museum of Zoology)
Species Profile (U.S. Fish & Wildlife Service)
Identification Tips & More (USGS Patuxent Wildlife Research Center)
Kirtland's Warbler (U.S. Forest Service)
Kirtland's Warbler Auto Tours (US Forest Service)
Captain Planet: Old Ma River
Meredith Darlington December 30, 2009, 2:59 p.m.
In Old Ma River, the Planeteers decide to spend some time in India after an emergency mission brings them to the country. When everyone but Wheeler and his new friend Lita gets sick, the two must find out who or what is responsible.
*** "Captain Planet" is an animated series created by Ted Turner in the 1990s. MNN.com is the primary source for "Captain Planet" episodes on the Web. Captain Planet is a superhero whose powers are implemented by a team of five Planeteers (as well as himself), each of whom helps combat environmental catastrophes like pollution, animal poaching, water scarcity and more. Each episode of "Captain Planet" involves the captain and his planeteers tackling different dilemmas and nefarious eco-villains.
Numerous celebrities contributed voice work to the show, including Meg Ryan, Jeff Goldblum, Whoopi Goldberg and many others. You can watch all the Captain Planet episodes on MNN and you can learn more about the series by going behind the scenes with the creators and major players.
Nanoparticles to probe mystery sperm defects behind infertility
This is boar sperm mixed with mesoporous silica nanoparticles that have been tagged with fluorescent green dye for identification. These nanoparticles were developed by Oxford University researchers to investigate 'mystery' cases of infertility. They can be loaded with any compound to identify, diagnose or treat the causes of infertility.
Credit: Natalia Barkalina/Oxford University
A way of using nanoparticles to investigate the mechanisms underlying 'mystery' cases of infertility has been developed by scientists at Oxford University.
Oxford, UK | Posted on November 15th, 2013

The technique, published in Nanomedicine: Nanotechnology, Biology and Medicine, could eventually help researchers to discover the causes behind cases of unexplained infertility and develop treatments for affected couples. The method involves loading porous silica nanoparticle 'envelopes' with compounds to identify, diagnose or treat the causes of infertility.

The researchers demonstrated that the nanoparticles could be attached to boar sperm with no detrimental effects on their function.

'An attractive feature of nanoparticles is that they are like an empty envelope that can be loaded with a variety of compounds and inserted into cells,' says Dr Natalia Barkalina, lead author of the study from the Nuffield Department of Obstetrics and Gynaecology at Oxford University. 'The nanoparticles we use don't appear to interfere with the sperm, making them a perfect delivery vessel.'

'We will start with compounds to investigate the biology of infertility, and within a few years may be able to explain or even diagnose rare cases in patients. In future we could even deliver treatments in a similar way.'

Sperm are difficult to study due to their small size, unusual shape and short lifetime outside of the body. Yet this is a vital part of infertility research, as senior author Dr Kevin Coward explains: 'To discover the causes of infertility, we need to investigate sperm to see where the problems start. Previous methods involved complicated procedures in animals and introduced months of delays before the sperm could be used.'

'Now, we can simply expose sperm to nanoparticles in a petri dish. It's so simple that it can all be done quickly enough for the sperm to survive perfectly unharmed.'

The team, based at the Institute of Reproductive Sciences, used boar sperm because of its similarities to human sperm, as study co-author Celine Jones explains: 'It is similar in size, shape and activity. Now that we have proven the system in boar sperm, we hope to replicate our findings in human sperm and eventually see if we can use them to deliver compounds to eggs as well.'

The research was an interdisciplinary effort, involving reproductive biologists from the Nuffield Department of Obstetrics & Gynaecology and nanoscientists from the Department of Engineering Science led by Dr Helen Townley.

The study was funded by the Nuffield Department of Obstetrics & Gynaecology at Oxford University and the Engineering and Physical Sciences Research Council (EPSRC). This technique is the subject of patent applications held by Isis Innovation, Oxford University's technology transfer arm.

###

A report of the research, entitled 'Effects Of Mesoporous Silica Nanoparticles Upon The Function Of Mammalian Sperm In Vitro', is published in this month's Nanomedicine: Nanotechnology, Biology, and Medicine.

The US provisional patent application number for the technique is 61/747781.

About University of Oxford

Oxford University's Medical Sciences Division is one of the largest biomedical research centres in Europe, with over 2,500 people involved in research and more than 2,800 students. The University is rated the best in the world for medicine, and it is home to the UK's top-ranked medical school.

From the genetic and molecular basis of disease to the latest advances in neuroscience, Oxford is at the forefront of medical research. It has one of the largest clinical trial portfolios in the UK and great expertise in taking discoveries from the lab into the clinic.

Partnerships with the local NHS Trusts enable patients to benefit from close links between medical research and healthcare delivery.

A great strength of Oxford medicine is its long-standing network of clinical research units in Asia and Africa, enabling world-leading research on the most pressing global health challenges such as malaria, TB, HIV/AIDS and flu. Oxford is also renowned for its large-scale studies which examine the role of factors such as smoking, alcohol and diet on cancer, heart disease and other conditions.
Contacts: University of Oxford Press Office, +44 (0)1865 280528
Copyright © University of Oxford
For release: 03/24/03
Release #: 03-058

X-rays found from a lightweight brown dwarf

Using NASA's Chandra X-ray Observatory, scientists have detected X-rays from a low mass brown dwarf — a faint, "failed star" lacking a central energy source — in a multiple star system as young as 12 million years old. This discovery is an important piece in an increasingly complex picture of how brown dwarfs evolve. The Marshall Center manages the Chandra program.

Photo: Chandra image of the brown dwarf TWA 5B (NASA/CXC/Chuo U./Y. Tsuboi et al.)

Using NASA's Chandra X-ray Observatory, scientists have detected X-rays from a low mass brown dwarf in a multiple star system, which is as young as 12 million years old. This discovery is an important piece in an increasingly complex picture of how brown dwarfs — and perhaps the very massive planets around other stars — evolve.

Chandra's observations of the brown dwarf, known as TWA 5B, clearly resolve it from a pair of Sun-like stars known as TWA 5A. The system is about 180 light years from the Sun and a member of a group of about a dozen young stars in the southern constellation Hydra. The brown dwarf orbits the binary stars at a distance about 2.75 times that of Pluto's orbit around the Sun. This is the first time that a brown dwarf this close to its parent star(s) has been resolved in X-rays.

"Our Chandra data show that the X-rays originate from the brown dwarf's coronal plasma which is some 3 million degrees Celsius," said Yohko Tsuboi of Chuo University in Tokyo, lead author of a paper describing these results in the April 10th issue of Astrophysical Journal Letters. "The brown dwarf is sufficiently far from the primary stars that the reflection of X-rays is unimportant, so the X-rays must come from the brown dwarf itself."

TWA 5B is estimated to be only between 15 and 40 times the mass of Jupiter, making it one of the least massive brown dwarfs known. Its mass is rather near the currently accepted boundary (about 12 Jupiter masses) between planets and brown dwarfs. Therefore, these results may also have implications for very massive planets, including those that have been discovered as extrasolar planets in recent years.

"This brown dwarf is as bright as the Sun today in X-ray light, while it is fifty times less massive than the Sun," said Tsuboi. "This observation, thus, raises the possibility that even massive planets might emit X-rays by themselves during their youth!"

This research on TWA 5B also provides a link between an active X-ray state in young brown dwarfs (about 1 million years old) and a later, quieter period of brown dwarfs when they reach ages of 500 million to a billion years. Brown dwarfs are often referred to as "failed stars," as they are believed to be under the mass limit (about 80 Jupiter masses) needed to spark the nuclear fusion of hydrogen to helium, which characterizes traditional stars. Scientists hope to better understand the evolution of magnetic activity in brown dwarfs through the X-ray behavior.

Chandra observed TWA 5B for about three hours on April 15, 2001, with its Advanced CCD Imaging Spectrometer (ACIS). Along with Chandra's mirrors, ACIS can achieve the angular resolution of a half arc second.

"This brown dwarf is about 200 times dimmer than the primary and located just two arcseconds away," said Gordon Garmire of Penn State University, who led the ACIS team. "It's quite an achievement that Chandra was able to resolve it."

Other members of the research team included Yoshitomo Maeda (Institute of Space and Astronautical Science, Kanagawa, Japan), Eric Feigelson, Gordon Garmire, George Chartas, and Koji Mori (Penn State University), and Steve Prado (Jet Propulsion Laboratory).

NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program, and TRW, Inc., Redondo Beach, Calif., is the prime contractor for the spacecraft. The Smithsonian's Chandra X-ray Center controls science and flight operations from Cambridge, Mass., for the Office of Space Science at NASA Headquarters, Washington.

Images and additional information about this result are available at: http://chandra.harvard.edu and http://chandra.nasa.gov

Contact: Steve Roy, Public Affairs Office, (256) 544-0034; Megan Watzke, Chandra X-ray Obs. Center, (617) 496-7998
Science Fiction or Science Fact?

Here's a short quiz to test your knowledge of what's real and what isn't in the area of space travel and the search for extraterrestrial life.
1. We have strong evidence that our solar system is not the only one; we know there are many other Suns with planets orbiting them.
SCIENCE FACT.
Improved telescopes and detectors have led to the detection of dozens of new planetary systems within the past decade, including several systems containing multiple planets.
One giant leap for bug-kind
2. Some organisms can survive in space without any kind of protective enclosure.
SCIENCE FACT.
In a European Space Agency experiment conducted in 2005, two species of lichen were carried aboard a Russian Soyuz rocket and exposed to the space environment for nearly 15 days. They were then resealed in a capsule and returned to Earth, where they were found in exactly the same shape as before the flight. The lichen survived exposure to the vacuum of space as well as the glaring ultraviolet radiation of the Sun.
Hot real estate
3. Organisms have been found living happily in scalding water with temperatures as high as 235 degrees F.
SCIENCE FACT.
More than 50 heat-loving microorganisms, or hyperthermophiles, have been found thriving at very high temperatures in such locations as hot springs in Wyoming's Yellowstone National Park and on the walls of deep-sea hydrothermal vents. Some of these species multiply best at 221 degrees F, and can reproduce at up to 235 degrees F.
Has E.T. already phoned home?
4. We now have evidence that some form of life exists beyond Earth, at least in primitive form.
SCIENCE FICTION.
While many scientists speculate that extraterrestrial life exists, so far there is no conclusive evidence to prove it. Future missions to Mars, the Jovian moon Europa and future space telescopes such as the Terrestrial Planet Finder will search for definitive answers to this ageless question.
5. We currently have the technology necessary to send astronauts to another star system within a reasonable timespan. The only problem is that such a mission would be overwhelmingly expensive.
SCIENCE FICTION.
Even the unmanned Voyager spacecraft, which left our solar system years ago at a breathtaking 37,000 miles per hour, would take 76,000 years to reach the nearest star. Because the distances involved are so vast, interstellar travel to another star within a practical timescale would require, among other things, the ability to move a vehicle at or near the speed of light. This is beyond the reach of today's spacecraft -- regardless of funding.
Fellowship of the rings
6. All of the gas giant planets in our solar system (Jupiter, Saturn, Uranus and Neptune) have rings.
SCIENCE FACT.
Saturn's rings are the most pronounced and visible, but they aren't the only ones.
May the force be with you
7. In the "Star Wars" films, the Imperial TIE Fighters are propelled by ion engines (TIE stands for Twin Ion Engine). While these spacecraft are fictional, real ion engines power some of today's spacecraft.
SCIENCE FACT.
Ion propulsion has long been a staple of science fiction novels, but in recent years it has been successfully tested on a number of unmanned spacecraft, most notably NASA's Deep Space 1. Launched in 1998, Deep Space 1 rendezvoused with a distant asteroid and then with a comet, proving that ion propulsion could be used for interplanetary travel.
A question of gravity
8. There is no gravity in deep space.
SCIENCE FICTION.
If this were true, the moon would float away from the Earth, and our entire solar system would drift apart. While it's true that gravity gets weaker with distance, it can never be escaped completely, no matter how far you travel in space. Astronauts appear to experience "zero-gravity" because they are in continuous free-fall around the Earth.
9. The basic premise of teleportation -- made famous in TV's "Star Trek" -- is theoretically sound. In fact, scientists have already "teleported" the quantum state of individual atoms from one location to another.
SCIENCE FACT.
As early as the late 1990s, scientists proved they could teleport data using photons, but the photons were absorbed by whatever surface they struck. More recently, physicists at the University of Innsbruck in Austria and at the National Institute of Standards and Technology in Boulder, Colorado, for the first time teleported individual atoms using the principle of quantum entanglement. Experts say this technology eventually could enable the invention of superfast "quantum computers." But the bad news, at least for sci-fi fans, is that experts don't foresee being able to teleport people in this manner.
Good day, Suns-shine
10. Tatooine, Luke Skywalker's home planet in the "Star Wars" films, has two Suns -- what astronomers would call a binary star system. Scientists have discovered recently that planets really can form within such systems. SCIENCE FACT.
Double stars, or binary systems, are common in our Milky Way galaxy. Among the more than 100 new planets discovered in recent years, some have been found in binary systems, including 16 Cygni B and 55 Cancri A. (But so far, no one has found a habitable planet like Luke Skywalker's Tatooine.)
South Korea Is Building 'Invisible' Skyscraper
Uses cameras and LED screens to disappear from sight
Ruth Brown,
(GDS Architects)
Superpowered skyscrapers seem to be all the rage these days. London has one that can melt cars, and now South Korea is planning to build one with the power of invisibility. The country's government recently gave the go-ahead for the construction of a 1,476-foot structure called "Tower Infinity" in the city of Incheon, reports the Wall Street Journal. The tower, designed by a US-based architecture firm, is expected to house a movie theater, roller coaster, and water park, reports CNN. But by far its coolest feature is that it will "disappear."
No magic or mutant abilities will be needed: The trick is performed via 500 LED screens, and cameras at three different heights on six sides of the building. The cameras capture what's behind the building, then display it on the screens, tricking observers' eyes into making it look like the building isn't there. But there are some caveats: It will only work at certain times of day and at certain angles, and will only be turned on for short periods at specific times, reports the Journal. And though it may sound like a plane crash waiting to happen, the project's backers say it's safe for air traffic, because even in invisibility mode it will still have to keep its red warning lights on. (A new 47-story building in Spain is facing a bizarre design issue, however.)
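For the curious, the camera-to-screen relay described above can be sketched in a few lines of code. This is purely illustrative: the six faces and rough panel count come from the article, but the opposite-face camera mapping and every name below are assumptions, not details of the building's actual control system.

```cpp
#include <array>
#include <cstdio>

// Toy model of the "invisibility" trick: each LED face replays whatever the
// camera looking out from the opposite face currently sees, so an observer
// sees an approximation of the scenery behind the tower.
constexpr int kFaces = 6;                    // six sides, per the article
constexpr int kPanelsPerFace = 500 / kFaces; // rough share of the ~500 LED screens

struct Frame { int sceneId; };               // stand-in for a captured image

Frame captureBehind(int face) {
    // Hypothetical mapping: the camera on the opposite face supplies the backdrop.
    return Frame{(face + 3) % kFaces};
}

int main() {
    std::array<std::array<Frame, kPanelsPerFace>, kFaces> ledWall{};
    for (int face = 0; face < kFaces; ++face) {
        const Frame behind = captureBehind(face);
        for (auto& panel : ledWall[face]) panel = behind;  // replay the backdrop
    }
    std::printf("face 0 is showing scene %d\n", ledWall[0][0].sceneId);
    return 0;
}
```

In the real tower, as noted above, the effect would only hold at certain angles and for short periods at specific times of day.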
Researchers Create Free, Downloadable Software Radio Design Tool
Released: 16-Nov-2004 11:00 AM EST
Source Newsroom: Virginia Tech
Software, Radio, Open Source, Wireless, Communication, Emergency
Newswise — The Mobile and Portable Radio Research Group (MPRG) in Virginia Tech's Bradley Department of Electrical and Computer Engineering has developed the fundamental software for use in designing software radios and is offering this tool free to other wireless communications researchers throughout the world. "The tool available on the Virginia Tech website already has been downloaded by numerous companies and universities from around the world," said Jeffrey Reed, professor of electrical and computer engineering and deputy director of the MPRG.

"Software radio technology is today where personal computer technology was in the 1970s," said Max Robert, the MPRG post-doctoral Fellow who led development of the new tool, "OSSIE" (Open-Source Software Communication Architecture Implementation: Embedded).

Software radios can be any devices that use wireless radio frequency transmission and reception for communications — including cell phones, walkie-talkies, televisions, AM-FM radios, cordless phones, garage door openers, radar, satellites, shortwave radios, pagers and GPS (global positioning systems), to name a few.

Currently, radios of all kinds perform their signal processing — transmitting and receiving — based on dedicated hardware. A combination TV/AM-FM radio operates with two separate radios, one to receive television broadcasts and the other to receive radio broadcasts. Similarly, a combination garage door/car door opener has to be constructed with two distinct transmitters. This dependence on dedicated hardware limits the function of a radio. For example, a fire chief using a walkie-talkie to contact the walkie-talkie carried by a policeman in a burning building has to hope that the two devices have the same type of dedicated hardware. Using a software radio, the fire chief could simply load in software designed to communicate with the policeman's device. This transition would be possible if the signal processing capability were defined by software, rather than by dedicated hardware. In addition, the fire chief's software radio could communicate with a variety of other devices, such as cell phones.

The concept of software radios has been especially attractive to the U.S. Department of Defense, which years ago established the Joint Tactical Radio System (JTRS) to create general purpose hardware that can operate as software-defined radios. This is where MPRG's OSSIE comes into play. OSSIE is an operating environment, or software framework, that is compatible with the JTRS military hardware and is written in C++, a computer programming language commonly used by wireless researchers. OSSIE is an environment within which software radios can be programmed and can operate.

MPRG's Robert and a team of graduate students first developed OSSIE as a tool for a software radio research project sponsored by the Office of the Director of the Central Intelligence Agency. Robert and Reed soon realized that other researchers could use OSSIE in their development of software radios. They also realized that pooling software with other researchers would add to a collective knowledge base for the creation of a variety of working software radios.

MPRG has made OSSIE an open-source tool, which means that researchers can download it for free and, in turn, are responsible for sharing their findings for free with other researchers. "Offering OSSIE as an open-source tool over the Internet will speed up growth of the technology and make faster innovations possible," Robert said.
"This will benefit all wireless researchers who are working to develop software radios." Researchers can download OSSIE from the Virginia Tech MPRG Web site athttp://www.mprg.org/research/ossie. Permalink to this article | 科技 |
Learning from the Legacy of a Catastrophic Eruption
By Michelle Nijhuis
Many people fled Mt. St. Helens when it began to erupt, in 1980, but a group of ecologists was already wondering how to get closer. Credit: Photograph by Jim Valance / USGS / Cascades Volcano Observatory / Reuters

On the morning of May 18, 1980, in southwestern Washington State, an earthquake on the northern flank of Mt. St. Helens released an immense landslide, a blast of superheated gas and rock, and a fifteen-mile-high plume of ash. The eruption, which killed fifty-seven people and destroyed two hundred and fifty homes, was the most destructive volcanic event in U.S. history, and when it began most Washingtonians tried to get as far away from the mountain as they could. But Jerry Franklin, then a forestry professor at Oregon State University, was already wondering how to get closer. Ecologists had plenty of theories about how natural systems reacted to large, intense disturbances; here, at last, was a chance to test them.

Less than a week after the eruption, when Franklin and several other researchers flew over the blast zone, they saw a grayscale landscape quilted with downed trees. "Our working hypothesis was that everything was totally sterilized," he told me recently. But, in early June of that year, when they were finally able to set foot on the mountain, they saw ants, beetles, aphids, and pocket gophers on the move. Hundreds of fireweed plants were pushing up through the ash. In late July, when Franklin and a colleague took a helicopter trip to Meta Lake, just a few miles from the still-grumbling crater, their pilot caught a trout.
He reminded his colleagues that his office had recently purchased heavy-duty fence posts to mark research plots, and that his crew members were available to record the G.P.S. locations of study sites. "We want to make sure that we transfer this work to a new generation," he said.

Long-term ecological research is not a glamorous field: its results unfold slowly, often over the course of several careers. For young researchers under pressure to publish new findings, it can be a difficult sell. Still, some are signing up to extend the story of an eruption that they are too young to remember. Cynthia Chang, now a professor of ecology at the University of Washington at Bothell, is one of those people. She was in graduate school when she read a paper by the University of Washington plant ecologist Roger del Moral; she moved across the country to continue and expand his decades of research on the pumice plain. "To have a thirty-year uninterrupted data set—in ecology, that's amazing," she told me. Joseph Antos and Donald Zobel, who study the long-term effects of volcanic ash on the forests around Mt. St. Helens, have found a successor in Dylan Fischer, a professor at Evergreen State College, in Olympia, Washington. Fischer pointed out that, even thirty-five years later, the forests are responding to the eruption in complex ways. "The only people who think all the questions here have been answered are those who haven't looked yet," he said.

Jerry Franklin, now in his late seventies and a professor at the University of Washington, has come to appreciate the importance of persistence. The assortment of plants and animals that happened to survive the eruption underground, or in patches of snow, has had an unexpectedly strong influence on how the forests look today. Had the cataclysm happened in a different season, or even at a different time of day, the present would be different, too; most of the tall firs now surrounding Meta Lake, for example, were young trees at the time of the eruption, and survived only because the snow was still deep enough to protect them. "We expected the story to be one of invasion, of things arriving from elsewhere," Franklin said. "But, in most places on the mountain, there were also these incredible living legacies." Understanding the influence of those legacies on Mt. St. Helens has changed the field of ecology, and it has changed Franklin. Even twenty-five years after the eruption, he said, he still thought of the forests around the volcano as recovering from a disaster. "Finally, I looked around and thought, 'This place is absolutely alive. Why would I want it to recover?'"
Heat Waves May Compound Global Warming
September 22, 2005, 12:00 AM ET
Europe's devastating heat wave of 2003 killed an estimated 35,000 people. Research published in the journal Nature says the heat also put huge amounts of heat-trapping carbon dioxide into the atmosphere, causing concern that it accelerated global warming.
STEVE INSKEEP, host: The death toll from Hurricane Katrina currently stands at more than 1,000, but that is by no means the biggest climate disaster of this decade. Two summers ago, Europe was gripped by a summer heat wave that killed an estimated 35,000 people. Research published in the journal Nature says that heat wave also ended up putting a huge amount of carbon dioxide into the atmosphere. That could contribute further to global warming, as NPR's Richard Harris reports.

RICHARD HARRIS reporting: Europe's killer heat wave also caused massive wildfires. It withered crops and forests, it brought a severe drought and it raised summer temperatures by an average of 12 degrees across the region.

Mr. PHILIPPE CIAIS (Laboratory of Climate Science and the Environment): I was in France during the heat wave and it was quite impressive to see the impact on health and also the trees becoming brown and some of them even dying.

HARRIS: Philippe Ciais has a special interest since he studies how plants take up carbon dioxide and give it off into the environment. He works at France's Laboratory of Climate Science and the Environment.

Mr. CIAIS: So I got the idea of trying to estimate with my colleagues the impact of this climate spell on plants.

HARRIS: The researchers looked at monitors from throughout Europe that measure the amount of carbon dioxide coming up from the ground, and they gathered information about crops from the region. Normally, plants in Europe soak up a lot more carbon dioxide than they produce. But Ciais now reports that was clearly not the case during the heat wave and drought of 2003.

Mr. CIAIS: We have estimated that during that particular year the consequence of these extreme summer conditions--the plants in Europe, they have outgassed about a half a billion ton of carbon into the atmosphere.

HARRIS: To put that in perspective, human activities in Europe usually contribute 1 billion tons a year. So this is half as much as the carbon dioxide that pours from tail pipes, chimneys and smokestacks. It normally takes the green plants in Europe four years to soak up that much carbon dioxide.

Mr. CIAIS: So suddenly you have a climate spell and you are undoing, essentially, everything which was taken up during four consecutive years before.

HARRIS: Ciais cares about carbon dioxide because this gas traps heat in the atmosphere. It's been building up rapidly as a result of human activities. The rate would be even faster if plants didn't capture a lot of the carbon dioxide that we emit. Now this finding suggests that droughts and heat waves can actually accelerate the process of climate change. Dennis Baldocchi at the University of California Berkeley says climate modelers expect to see a lot more summers like that in Europe.

Mr. DENNIS BALDOCCHI (University of California Berkeley): This extreme event here will be somewhat normal in 30 or 50 years from now. The big question we all want to know is: Can forests adapt to this rapid change in warming?

HARRIS: Baldocchi notes that forests in the United States have adapted to much warmer temperatures than those typical in Europe. And it's also true that carbon dioxide is a plant fertilizer. So in the absence of extreme events some scientists say plants might actually grow more vigorously as this gas continues to build up. Baldocchi says that may or may not prove to be the case, and he notes that the drought also seems to have harmed the soils in Europe and, as a result, the plants didn't bounce back entirely the following year even though the weather was much more favorable.

Mr. BALDOCCHI: So these are also interesting questions. You know, if you have an extreme event, does it have lingering effects down the road? And the initial data's starting to show that.

HARRIS: So it remains to be seen whether plants will help slow the buildup of carbon dioxide in the environment or whether eventually they will become another part of the problem. Richard Harris, NPR News.

INSKEEP: This is NPR News.
The Macalope Daily: Motivations
What motivates a company? Well, turns out it might be the same thing that motivates someone to write a diatribe about Apple.
The Macalope
Virginia Heffernan gives us an object lesson in this principle, writing for Yahoo in "Machine Politics" (whatever that is): "Apple's Map app non-apology: We're sorry you feel that way" (tip o' the antlers to Matthew Smith).
Non-apology? Here's what Cook said:
Now, the Macalope knows what a non-apology looks like and this isn't that.
The point of offering a non-apology is to artfully avoid any implication of personal blame. "I'm sorry you feel that way. It certainly isn't because of anything I've done, but I'm sorry you feel that way. Possibly it's because you're a tremendous reactionary who takes everything the wrong way and has a lot of personal issues. Are you in therapy? You should be. I'm sorry you need therapy because of what I'm guessing was a bad childhood."
That's a non-apology. In Cook's actual apology, however, the "this" in third sentence is "we fell short on our commitment." It's not "your stupid feelings are so sensitive," it's something that Apple did. What he's saying is "I'm sorry what we did caused you frustration." It's a little odd that Cook splitting it out into two sentences seems to have confused a former New York Times staff writer who has also worked at Harper's, The New Yorker, and Slate and has a fricking Ph.D in English Literature from Harvard.
And it's really odd that it takes a comic drawing of a mythical man-Mac-antelope hybrid to explain it to her. But that seems to be where we are.
What bugged users about Apple Maps was not that it was imperfect.
Really? That wasn't it? Because the Macalope's followed this rather closely--admittedly, maybe not as closely as someone who covers "Machine Politics"--and he's pretty sure that's it.
Every app we use is imperfect.
Indeed. Then why the uproar, doctor?
Instead, what was maddening was that you, Apple, turned so petty, arrogant and spiteful that you tried to drive the renowned marvel that is Google Maps--which since 2005 has tested, refined and made stunningly useful its high-res aerial and satellite images of virtually the whole planet--off your dumb new operating system, iOS6.
Wait. You're actually saying that if the data in the Maps app had been perfect, people would have still been mad because of a business decision Apple made?
Uh, no. Really, people don't care where their map data comes from, as long as it's good. The data Apple used wasn't as good, and people expect better from Apple. And some people sit around waiting for Apple's rare mistakes so they can make a big deal about it. That's what this is all about.
You pushed out a free, great thing...
Stop. Stop. Google Maps is not free. It certainly isn't free to Apple, and ads are a cost to users. Heffernan completely neglects to mention this anywhere in her extended hyperbolic rant, but one of the major reasons Apple couldn't come to terms with Google was because Google wanted more user data. Apple didn't want to give it to them because Apple's customers are people who buy iPhones. Google wanted it because Google's customers are advertisers. That's not crazy Apple fanboi talk, that's Business 101.
...and jammed in your amateur dimestore one, only because you were feeling afraid and grasping, and in so doing you showed a sicko side of yourself (one we all suspect has always been there).
You know Apple isn't actually an individual, right? And certainly not a serial killer or pervert or whatever it is you're implying? Because someone here's got issues and the Macalope's pretty sure it isn't the giant corporation that's being over-anthropomorphized.
It's exciting, sort of, when Titans clash. In the mythical Californian kingdoms of Cupertino and Mountain View, Apple and Google dramatically have at each other. They show no mercy. As arms resound...
Wait, is this whole column a viral ad for Game of Thrones? Because that would make more sense.
Actually, if you're looking for the real reason for Heffernan writing this ridiculously overbearing piece, look no further than the end of the author blurb:
Her new book, Magic and Loss: The Pleasures of the Internet, will be published in early 2013.
Oooh, there's still time for the Macalope to make it into the section about how mean fanbois can be to people who are very reasonably pointing out how evil Apple is!
Sorry, Virginia, you were saying something about companies making poor choices that reflect badly on themselves just to make money. Please continue.
We should maybe be grateful, then, for incidents like this latest dustup over Apple Maps. It lays that greed and pettiness bare.
Not bare enough, apparently, as we're 500 words in here and you haven't brought up even one of the very real business reasons on the part of both Apple and Google that led to this. There sure have been a lot of dramatic allusions to mythological warfare, though.
The company's aesthetic of purity and perfection--however fascist in nature--...
Paging Mr. Godwin. Mr. Godwin to the white courtesy phone, please.
...is irresistible. But we shouldn't forget that Apple is everywhere--in our pockets, in our brains, all over our credit-card bills--for a reason. And it's not because the company loves us.
No. You're right. It's because they make awesome products.
Yes, Apple is a company and companies don't love. They don't feel empathy or compassion. They don't have hopes and dreams.
Neither, however, are they "narcissistic." They do not "feel afraid" and they most certainly do not have an "id."
So what was your point again?
Oh, right. Book sales.
[Editors' Note: Each week the Macalope skewers the worst of the week's coverage of Apple and other technology companies. In addition to being a mythical beast, the Macalope is not an employee of Macworld. As a result, the Macalope is always free to criticize any media organization. Even ours.]
Facebook Hit by Class Action Suit Over 'Beacon'
August 14, 2008 02:47pm EST
Facebook Beacon ruined Christmas and a group of its members are not going to let the social networking site forget it. A group of irked Facebook members filed a class-action lawsuit against Facebook Tuesday that said the company's controversial Beacon advertising program violates several laws, including the Electronic Communications Privacy Act (ECPA), the Computer Fraud and Abuse Act (CFAA), and the Video Privacy Protection Act (VPPA).

Beacon, which launched in late 2007, basically tracked the activity of Facebook members on certain partner sites and then posted an item in users' news feeds when they purchased something. Bought movie tickets on Fandango, or new shoes on overstock.com? If you failed to click "no" on a blink-and-you-miss-it notice during checkout, your Facebook newsfeed would soon read "Chloe bought Dark Knight on Fandango." That might not seem too harmful, but what if you bought tickets to "Beverly Hills Chihuahua" on Fandango or a copy of "He's Just Not That Into You" on overstock.com? Suddenly, all 200 of your closest "friends" are privy to that information.

That's what happened to people like Sean Lane, a plaintiff in the class-action suit. He bought a ring for his wife as a Christmas present via overstock.com, but the surprise was ruined when Overstock sent the information to Facebook. "Sean Lane bought a 14K White Gold 1/5 ct Diamond Eternity Flower Ring from overstock.com" showed up on his news feed, which was visible to his wife.

After a protest campaign spearheaded by MoveOn.org that accused Facebook Beacon of ruining Christmas, Facebook founder Mark Zuckerberg issued a mea culpa and altered Beacon to allow users to block partner Web sites from sending their information to the news feed. That apparently was not enough for the 32 plaintiffs who filed suit on Tuesday with the U.S. District Court for the Northern District of California. "The heart of the conduct complained of involved the communication, transmission, and interception of personally identifying information and personal private data of the class members," according to the suit. Also named in the suit are partner sites Fandango, Hotwire, STA Travel, GameFly, Blockbuster, Overstock, and Zappos.com.

Beacon violated the ECPA because Facebook intentionally obtained electronic communications between their members and partner sites and disclosed that information to a third party, according to the suit. The program violated the VPPA because video retailers Blockbuster, Fandango, GameFly and overstock.com released information about their customers' video usage without permission. The VPPA was passed in 1988 after the video rental history of Supreme Court nominee Robert Bork was published in a newspaper during his nomination process. Congress found that the videos one chooses to watch is deeply personal and should not be made public without permission. Beacon also violated the CFAA by accessing a computer without permission, according to the lawsuit. The suit also says Beacon was in violation of two California laws: the Consumer Legal Remedies Act and the California Computer Crime Law.

The lawsuit wants Facebook to hand over any money it earned from Beacon, pay attorneys' fees and other costs, delete any information it obtained about the plaintiffs, never collect personal information about them without permission again, and consent to some sort of independent review to make sure Facebook is following the rules.
A Facebook spokeswoman said the company had not yet been served with the lawsuit and could not yet comment on it.
Gates to Testify in Novell Suit
By Joab Jackson, IDG News Service
Ex-Microsoft CEO and current Chairman Bill Gates is testifying Monday in U.S. Federal Court in a seven-year-old, US$1 billion antitrust lawsuit by Novell.

Novell first brought the suit against Microsoft in November 2004, claiming Microsoft had purposefully misled Novell prior to the launch of Microsoft's Windows 95 operating system and, as a result, caused Novell to lose market share of its WordPerfect office suite. Novell argued that Microsoft's alleged trickery ran afoul of U.S. antitrust laws, according to court documents. The U.S. District Court of Utah in Salt Lake City is handling the case, which is being presided over by Judge Frederick Motz of the U.S. District Court for the District of Maryland. The case had been moved to the court in Maryland to combine pretrial proceedings with other lawsuits filed against Microsoft. Novell had filed six claims against Microsoft, five of which were dismissed, with the U.S. Court of Appeals in May reversing the dismissal of the sixth claim and moving the case back to the District Court in Utah, with Motz presiding over it there.

Novell lawyers contend that Microsoft had invited Novell to work on a version of WordPerfect for the Windows 95 operating system. Gates directed Microsoft's engineers to reject Novell's application to make WordPerfect an official Windows application for the launch, according to Novell. Microsoft was also selling its own office suite, Microsoft Office. After the release of Windows 95, sales of WordPerfect slowed dramatically. Novell later went on to sell WordPerfect to Corel, at a US$1 billion loss. Microsoft lawyers have argued that the version of WordPerfect for Windows 95 was defective and caused the operating system to crash. "We are confident Bill Gates' testimony will help show that Novell's claims have no merit," said Microsoft lead attorney James Jardine, in a statement.

While both newer versions of Windows and the WordPerfect suite have subsequently been released, the case is still important in that it could help clarify what obligations an owner of a dominant technology has to help potential competitors, said Bruce Schneider, an antitrust partner with the Stroock & Stroock & Lavan law firm. He is not involved in the lawsuit.

Attachmate purchased Novell last April, with the help of Microsoft, which acquired a number of patents in the deal.
Looking Down the Endoscope
BioPhotonics, Jan 2014
Gary Boas, News Editor, [email protected]
Microscopy techniques advance the monitoring of Barrett's esophagus and other applications.
Barrett’s esophagus – a condition in which the cells lining the esophagus are replaced by cells similar to those found in the stomach – affects anywhere from 1.6 to 6.8 percent of the population. It is not the most common disorder in the gastrointestinal tract, but it is a growing concern, especially as it is associated with increased risk for esophageal adenocarcinoma, a rare but particularly deadly form of cancer.
Tiny microscopes that can see inside single living cells could help gastroenterologists and other specialists diagnose Barrett’s esophagus, a precursor to cancer, and a variety of other conditions.
For this reason, patients diagnosed with Barrett’s esophagus are often placed in a surveillance program to watch for precancerous dysplasia or cancer itself. This involves periodic assessment with white-light endoscopy; for example, the “Seattle protocol” calls for random, four-quadrant biopsies taken at 1- to 2-cm intervals beginning at the gastrointestinal junction. But there are a number of drawbacks to the current approach. Not least: the associated sampling error, and the high cost of the procedure itself and the subsequent histological interpretation.
Researchers are hard at work trying to address these drawbacks. One of the more promising methods they are working with is confocal microscopy, a technique that takes advantage of point illumination and a pinhole to achieve improved resolution and contrast compared with conventional wide-field fluorescence microscopy. Especially with recent advances, the technique shows a great deal of potential for endoscopic applications.
An SECM endoscopic probe ((a) – collimation optics; and (b) – final probe assembly; scale bar = 2 mm). Courtesy of DongKyun Kang et al (2013), ‘Endoscopic probe optics for spectrally encoded confocal microscopy,’ Biomedical Optics Express, Vol. 4, No. 10 (doi: 10.1364/BOE.4.001925).
Invented in 1955 by Harvard researcher Marvin Minsky, confocal microscopy is widely used today by researchers in the biological sciences and elsewhere to obtain images of the 3-D structure of a cell or a tissue sample, for example. Using a pinhole in front of the detector, it rejects the light coming from out-of-focus regions above and below a particular plane and, thus, offers improved optical sectioning of a sample at various depths.
More recently, the technique has been adapted for use with endoscopy to help address ongoing concerns about Barrett’s esophagus and other endoscopic needs. This approach – generally known as confocal laser endomicroscopy – uses the advantages generally found with confocal microscopy to produce high-quality images of the esophagus and other luminal areas of interest. In doing so, it could provide physicians with a means of real-time, in vivo histology.
(a) A catheter-based reflectance-type laser scanning confocal microscope from Mauna Kea Technologies of Paris and Fujinon of Saitama, Japan. (b) Laser confocal microscopy examination for early gastric cancer under endoscopy. Courtesy of Parama Pal (2013), ‘Spectrally encoded confocal microscopy: A new paradigm for diagnosis,’ Journal of the Indian Institute of Science, Vol. 93, Issue 1.
Researchers first described in vivo confocal laser endomicroscopy a little more than a decade ago. In 2003, Masanori Sakashita and colleagues reported a laser-scanning confocal microscopy system for real-time imaging of untreated specimens for examination of colorectal lesions, and further described a probe-based prototype endomicroscope that could be passed through the working channel of an endoscope. Progress came quickly after that, and today there are two commercially available confocal laser endomicroscopy systems: the eCLE from Pentax in Tokyo and the probe-based pCLE from Mauna Kea Technologies in Paris.
A number of studies have sought to assess the efficacy of these systems for endoscopic and other applications, and with promising results. For example, in a recent Digestive Diseases and Sciences paper (doi: 10.1007/s10620-012-2332-z), researchers in Italy found that pCLE exhibited enhanced sensitivity in detecting Barrett’s esophagus, compared with high-definition white-light endoscopy.
A schematic of spectrally encoded confocal microscopy (SECM) probe optics and system (CL = collimation lens, and BS = beamsplitter). SECM is being developed for possible clinical applications. Courtesy of DongKyun Kang et al (2013), ‘Endoscopic probe optics for spectrally encoded confocal microscopy,’ Biomedical Optics Express, Vol. 4, No. 10 (doi: 10.1364/BOE.4.001925).
Still, efforts to improve the techniques are ongoing – e.g., further development of the technique spectrally encoded confocal microscopy (SECM) for possible clinical application. This could fill an important need, say the authors of a recent Biomedical Optics Express paper (doi: 10.1364/BOE.4.001925), researchers from the lab of Guillermo J. Tearney at Harvard Medical School and the Wellman Center for Photomedicine at Massachusetts General Hospital in Boston. Confocal laser endomicroscopy has been successfully demonstrated, but the area of the tissue it can image is still relatively small: less than 0.25 mm2. This could lead to sampling error during biopsy.
Spectrally encoded confocal microscopy could address this limitation by providing large-area imaging of the esophagus, they say.
With SECM, multiple wavelengths of light are delivered to the target site, where each is diffracted by a grating and focused on a particular point on the sample. Because it requires only a stationary optical element – the diffraction grating – the technique can produce images at a very high rate, and for this reason could be more readily adapted for endoscopic applications.
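As a rough illustration of what "spectrally encoded" means, the sketch below maps each wavelength channel of a hypothetical broadband source to one lateral pixel, the way the grating does optically. The band and pixel count are made-up numbers for the example, not the parameters of the Wellman Center instrument.

```cpp
#include <cstdio>
#include <vector>

// Each wavelength is steered by the grating to its own spot on the tissue, so
// a single broadband read-out covers a whole image line with no moving mirror.
int main() {
    const double lambdaMin = 780e-9, lambdaMax = 820e-9;  // assumed source band (meters)
    const int pixelsPerLine = 512;                         // assumed channels per line

    std::vector<double> line(pixelsPerLine);
    for (int i = 0; i < pixelsPerLine; ++i) {
        // Linear stand-in for the grating equation: wavelength index -> lateral pixel.
        line[i] = lambdaMin + (lambdaMax - lambdaMin) * i / (pixelsPerLine - 1);
    }
    std::printf("pixel 0 <- %.1f nm, pixel %d <- %.1f nm\n",
                line.front() * 1e9, pixelsPerLine - 1, line.back() * 1e9);
    return 0;
}
```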
Tearney and colleagues previously described SECM benchtop scanning systems and demonstrated the potential of the technique for large-area imaging of luminal organs. They noted, however, that it still wasn’t ready for clinical use in the organs themselves. First, because the performance of the systems wasn’t yet sufficient for diagnosis. But also because the optics of the SECM probe were too large to be easily incorporated into endoscopic devices. The earlier devices were 15 mm in diameter. To be clinically useful, they would have to be miniaturized by a factor of about three.
Picture of a confocal scanning head integrated onto the distal tip of a conventional Pentax EC-3870CIFK colonoscope. Courtesy of Parama Pal (2013), ‘Spectrally encoded confocal microscopy: A new paradigm for diagnosis,’ Journal of the Indian Institute of Science, Vol. 93, Issue 1.
Hence the Biomedical Optics Express study, published in October. Here, the researchers reported probe optics 5.85 mm in diameter – small enough to fit inside a conventional endoscope. The optics used a custom water-immersion aspheric singlet as the objective lens, reducing the spherical aberrations and specular reflection from the surface of the tissue. This should facilitate improved imaging depth and, in fact, the researchers found they could image tissue in the esophagus down to a depth of 260 μm.
Simple and painless endoscopies?
Though they have made significant strides in developing endomicroscopy methods, researchers are looking to push further still – particularly with respect to probe size, which can have important implications for patient comfort and accessibility.
In a Nature Medicine paper published early last year (doi: 10.1038/nm.3052), Tearney’s team reported a technique called tethered capsule endomicroscopy. Here, an optomechanically engineered “pill” is used to acquire cross-sectional images as it travels through the gastrointestinal tract; the technique relies on light from a rapidly rotating laser, which reflects off the lining of the esophagus and is detected by internal sensors.
The technique is simple and painless, the researchers write. And because it doesn’t involve sedation, there is no need for the specialized medical equipment or staff necessitated by endoscopies otherwise.
The system described in the study uses optical frequency domain imaging to obtain the morphologic information that enables diagnosis of Barrett’s esophagus, for example. But the authors noted that the technique could be extended to other in vivo endomicroscopy technologies, including confocal microscopy.
Biological Sciences Staff Awards & Honors
Wei-Jun Qian Receives DOE Early Career Award
Wei-Jun Qian
Congratulations to Pacific Northwest National Laboratory scientist Dr. Wei-Jun Qian for receiving an Early Career Research Award from the Department of Energy. He will develop a suite of quantitative proteomics technologies to gain understanding of the spatial and temporal regulation of cellular functions. Funding for this 5-year research grant is under the American Recovery and Reinvestment Act. Qian will receive $500,000 a year to cover year-round salary plus research expenses.

In this project, called "Spatial and Temporal Proteomics for Characterizing Protein Dynamics and Post-Translational Modifications," PNNL researchers will demonstrate the effectiveness of the technology suite on environmental eukaryotes—organisms whose cells contain complex structures inside the membranes—such as Aspergillus niger, a fungus that plays an important role in biofuel production and global carbon cycling.

Qian has been at PNNL since 2002. His research involves developing and applying novel mass spectrometry-based quantitative proteomics approaches for accurate and sensitive quantification of protein dynamics in cells, tissues, and biofluids. He is currently focusing on developing more sensitive targeted protein quantification based on selected reaction monitoring (SRM)-mass spectrometry to complement global proteomics discovery. His work has enabled broad applications in various biological systems, and a number of the developed technologies have been applied to study cell signaling and biomarker discovery involved in different diseases such as diabetes.
The United States falls behind Germany in renewable energy investment
GlobalPost, April 18, 2011 · 10:01 AM UTC
By David Wroe
Cranes stand next to a row of wind turbine masts under construction on Aug. 20, 2010 near Linthe, Germany.
Andreas Rentz
BERLIN, Germany and BEIJING, China — In the rough and blustery North Sea, almost 30 miles off Germany’s coast, 12 wind turbines tower over the water, with rotors longer than football fields.
Like most wind turbines, those that make up the Alpha Ventus Offshore Wind Farm have gearboxes that transmit the power of the rotors to much faster-spinning cogs that generate electricity.
The strain from wind turbulence makes these gearboxes the most fragile part of the turbines and a true engineering headache.
Six of the Alpha Ventus gearboxes are made by Renk, a 130-year-old engineering company based in Augsburg, southern Germany. The three decades in which Renk has been involved with wind turbines haven’t always been easy, said Toni Weiss, general manager of Renk’s industrial gear division. But he estimates the firm is now five to seven years ahead of most rivals.
“It hasn’t always worked perfectly … there have been hiccups. But we feel quite at home in this field," Weiss said. "There was discussion in the political arena about whether wind is something that really helps us and if it is really efficient, but … these arguments have disappeared and there is hardly anybody now who says wind turbines are not worthwhile.”
It’s an illustration of how Germany is powering ahead when it comes to investment in renewable energy. Europe’s biggest economy doubled its investment last year to reach $41.2 billion. In doing so, it bumped the United States out of second place as an investor in clean energy technology, according to a recent Pew Charitable Trusts report that shows America is in danger of slipping behind.
China, meanwhile, is even more formidable. It is now the clean energy king, with a record $54.4 billion in investments in 2010 — a 39 percent increase on the previous year. The United States held the top spot until 2008. This year it fell another rung to third, with $34 billion in investments.
The observation among experts in the field is unanimous: The United States offers potential green investors uncertainty while countries like Germany and China have sent clear signals that they support such technology.
“Given uncertainties surrounding key policies and incentives, the United States’ competitive position in the clean energy sector is at risk,” the Pew report concluded.
The Chinese government, by comparison, has issued clearly stated directives from the very top to increase green technology.
“We will actively promote changes in the way energy is produced and used and raise energy efficiency,” Premier Wen Jiabao said in his annual address to the National People’s Congress in March. “We will give impetus to the clean use of traditional energy sources, intensify the construction of smart power grids, and vigorously develop clean energy.”
His words sent a clear signal that China’s clean energy development sector is open for business.
“With aggressive clean energy targets and clear ambition to dominate clean energy manufacturing and power generation, China is rapidly moving ahead of the rest of the world,” the Pew report said.
Or as Deborah Seligsohn, an energy specialist from the World Resources Institute, told the House Subcommittee on Energy and Power earlier this month: “Chinese economic strategists recognize that China was late to the industrial revolution and even late to the IT revolution, but it believes it can be a leader in a green revolution.”
Similarly, the German government’s clear support for renewable energy has made it a safe bet for investors, said Claudia Kemfert, an energy expert from the German Institute for Economic Research in Berlin.
Instrumental to the success has been its “feed-in tariffs,” backed by political consensus, whereby utility companies are compelled to buy clean energy at an inflated price. A family with solar panels on their roof can sell any spare electricity to the grid, while another family down the road pays a slightly higher price for grid electricity to subsidize the clean energy producers.
In about two weeks, Germany will unveil its third offshore wind farm, this one in the Baltic Sea. “Baltic 1,” as it is named, is part of a 3 billion euro ($4.3 billion) investment by utility company EnBW Energie Baden-Wuerttemberg, which aims to generate 20 percent of its energy from renewable sources by 2020.
“It’s one of the biggest things that’s happened for Germany in wind energy,” said Kristina Koebe, who heads a wind energy advocacy group based in Rostock on Germany’s Baltic coast. “We’re hoping that this will be the next step forward in really increasing the share of renewable energy.”
Indeed, it is a sufficiently big deal that Chancellor Angela Merkel will help with the launch on May 2. Although German governments have generally been strong advocates of clean energy, Merkel last year took a gamble and shifted her support to the nuclear industry — a move that backfired badly when the public turned sharply against her in the wake of Japan's Fukushima disaster. Merkel, who critics say tends to blow in the wind herself, is eager to bolster her renewable energy credentials. Her government is now scrambling to develop a policy to fast-track Germany's exit from nuclear power and focus on clean technology — a political imperative that will only strengthen the sector. Environment Minister Norbert Roettgen and Economy Minister Rainer Bruederle last week released a six-point plan to transfer from nuclear to renewables, including a 5 billion euro ($7.2 billion) boost to offshore wind farms.
Merkel, who critics say tends to blow in the wind herself, is eager to bolster her renewable energy credentials. Her government is now scrambling to develop a policy to fast-track Germany’s exit from nuclear power and focus on clean technology — a political imperative that will only strengthen the sector. Environment Minister Norbert Roettgen and Economy Minister Rainer Bruederle last week released a six-point plan to transfer from nuclear to renewables, including a 5 billion euro ($7.2 billion) boost to offshore wind farms.
Quite apart from the environmental, let alone the political, considerations, clean energy advocates argue it is plain business sense.
“The renewable energy sector is very important to Germany — across industry. In order to install a windmill, you need steel production. In total, an additional 360,000 jobs have been created in the renewable energy sector alone. In total, climate protection, including material efficiency, sustainable mobility, green buildings etc., will create more than 1 million new jobs,” Kemfert said.
“Investments will be up to 200 billion euros in the next 10 years. Most of the investment will come from the private sector. This will create jobs and a strong renewable energy sector.”
Giving the businessman’s point of view, Renk’s Toni Weiss said that investment in the United States is not going to grow as fast as other countries until it offers better tax breaks.
“If you look back to the past, you will see you have a direct correlation between tax benefits and investment," he said. "The U.S. doesn’t have many tax benefits for wind energy and they don’t make much investment.”
China, on the other hand, has included “green development” in its five-year plan for economic and social development, making it clear that the country plans to be a global leader in the sector.
Pew predicted that even with aggressive venture capitalists on board, other countries like the United States “will have substantial difficulty keeping pace with China.”
Yang Fuqiang, senior adviser on climate and energy for the Natural Resources Defense Council in China, said the precise language of the five-year plan, requiring improvements in energy efficiency, sent a clear signal to investors.
They’ve established clear green development goals, said Yang.
“It’s a mandate. It’s not a requirement or a target,” he explained. “For investors, this means the situation is secure.”
The situation in China is also urgent, as the country remains one of the world’s leading polluters and the largest producer of climate-change causing carbon emissions.
Yang noted there are further targets within the five-year plan to reduce reliance on coal as a power source from the current 70 percent to 65 percent within the next five years. Coal is still the leading source of power in Germany as well.
“Even though they put a lot of effort into developing renewables, it’s still not been enough to meet the demands of the energy sector,” Yang said.
NOAA 2004-110
Contact: Scott Smullen
UNITED STATES ASKS INTERNATIONAL COMMUNITY TO ADOPT COLLABORATIVE PLAN TO MANAGE SHARKS
NOAA Administrator Outlines Proposal that Mirrors U.S. Fishing Rules
New Orleans — The United States has asked the international community to start managing shark populations in the Atlantic, Mediterranean and Gulf of Mexico. The proposal asks other shark fishing nations to adopt procedures like those followed by U.S. fishermen and resource managers. NOAA Administrator Conrad C. Lautenbacher, Jr. detailed the U.S. proposal on shark management to the International Commission for the Conservation of Atlantic Tunas today during a news conference at the ICCAT meeting in New Orleans. The National Oceanic and Atmospheric Administration is an agency of the U.S. Department of Commerce.
More comprehensive catch data is necessary to support development of effective management for Atlantic sharks throughout their migratory range, and this can only be accomplished at the international level.
“We are serious about managing our fisheries in a sustainable way. The United States is inviting other countries to join us in improving the outlook for Atlantic sharks,” said retired Navy Vice Adm. Conrad C. Lautenbacher, Ph.D., under secretary of commerce for oceans and atmosphere and NOAA administrator. “Healthier shark populations would bolster economic opportunities for all shark harvesting nations, and we believe this action will help get us there.”
The United States has managed domestic Atlantic shark fisheries since 1993. However, populations of many species have continued to decline over the past decade despite a highly regulated U.S. fishery. Domestic shark fisheries are subject to a commercial limited entry program, low annual quotas, recreational catch limits and a prohibition on shark finning – the practice of cutting the fins off the shark and disposing of the carcass. Since sharks are migratory, fishermen from many nations fish on the same stock even though data collection and management efforts are not consistent between nations.
“While we are putting forth our best efforts in the United States to rebuild Atlantic sharks with comprehensive regulations for our domestic fleet, we can only achieve long-term healthy fisheries with cooperation from other shark fishing nations,” Lautenbacher said. “We are asking the international community to join us in our effort to support sustainable shark populations.”
“This proposal is key to moving ICCAT closer to an ecosystem management approach for Atlantic highly migratory fisheries,” said William Hogarth, director of NOAA’s National Marine Fisheries Service and the U.S. government commissioner to ICCAT. “It would result in a win-win situation for sharks, fishermen and the economies of all fishing nations that depend on healthy shark populations.”
As the regional fishery management organization with responsibility for migratory fisheries in the Atlantic Ocean, Mediterranean and Gulf of Mexico, the United States believes ICCAT is best equipped to take the lead in developing binding commitments for international shark management.
The U.S. shark proposal includes the following binding measures that would apply in the Atlantic, Mediterranean, and Gulf of Mexico:
A requirement for nations to report scientific data from all fisheries that catch sharks;
A ban on shark finning;
A requirement for nations to limit the number of vessels that target sharks;
A request for vessels to attempt the release of live sharks that are encountered as bycatch;
A call for scientific research to identify shark nursery areas and expand knowledge of these species’ basic life history; and
A call for nations to develop fishing gear that would reduce bycatch and improve post-release survivability of sharks.
The International Commission for the Conservation of Atlantic Tunas is made up of 39 members, representing 63 countries including the United States. Established in 1969, the commission facilitates international cooperation in research and conservation of fish stocks that are shared by many nations, such as tunas, swordfish, marlins, sailfish and spearfish. The commission’s involvement in shark management has been limited to date. ICCAT will deliberate on this and other proposals during the week and a decision is expected by the conclusion of the meeting on Sunday, November 21, 2004.
NOAA Fisheries is dedicated to providing and preserving the nation’s living marine resources and their habitat through scientific research, management and enforcement. NOAA Fisheries provides effective stewardship of these resources for the benefit of the nation, supporting coastal communities that depend upon them, and helping to provide safe and healthy seafood to consumers and recreational opportunities for the American public.
NOAA is dedicated to enhancing economic security and national safety through the prediction and research of weather and climate-related events and providing environmental stewardship of our nation’s coastal and marine resources.
NOAA: http://www.noaa.gov
NOAA Fisheries: http://www.nmfs.noaa.gov
ICCAT: http://www.iccat.es | 科技 |
Sony A7S – 3D is dead, long live 4K…maybe!
This is a still image grabbed directly from a YouTube video showing off the 4K resolution of the new Sony A7s DSLR camera which was announced yesterday in the US. In case you’re not sure what 4K is, it’s simply a higher resolution than the typical 1080p HD (and 2K) we have now. We’re starting to feel a little sorry for the mainstream camera companies (and tech firms in general), because it’s clear they’re beginning to run out of headroom on new tech, which could cause a problem for their continued ‘growth’.
The first warning shot was the abject failure of 3D, which was supposed to storm the TV and camera market and spark a new frenzied round of consumer purchases. This rejection must have caused quite a few sleepless nights in ConTech Inc head offices, because as we know businesses and profits must grow, and the way tech ensures that is by introducing new stuff.
Which brings us to the now. We currently have probably 3 main contenders for the ‘next great thing‘, including ‘wearables‘, ‘virtual‘ or ‘augmented‘ reality (again) and 4K. Of these, 4K is the probably the best bet for short term success, because it doesn’t need a lot of explaining or hardware upheaval. But it still needs quite a bit of consumer investment, which could still be a bit of a problem.
Lots of reasons were given for the failure of 3D (like headaches, glasses aversion etc) but at the end of the day, the real reason is consumers just didn’t really rate the experience as compelling enough to shell out yet more money to upgrade their kit. This inertia is a real problem when it comes to visual tech, and it’s something which may yet bite the manufacturers.
We recently attended a Sony product showcase at its UK headquarters, and a couple of times during the evening we were approached by Sony execs keen to find out what we thought of the company’s range of new 4K televisions, and we had to be honest and suggest that maybe people wouldn’t be so keen to upgrade their flat screens again so soon, for what looked to be very little content available in 4K format.
That may not have been the story they wanted to hear, but it’s probably likely to play out somewhere like that. The last great consumer tech goldrush was the update to flat screen HD and HD Ready tech (coincidentally also during a World Cup soccer year…do we spot a trend here?), but the transition from clunky old CRT low res was so significant that who could resist the excuse? No-one. Especially when governments started turning off the analog signals anyway.
But this time, as with 3D, we’re back with a situation where there’s almost no television programming available in 4K, the equipment is still very expensive relative to standard HD, and in any case, is 4K resolution really that compelling over 1080p? We would suggest it’s not.
Which takes us back to the new Sony A7s launch, and the launch of other similar products from Nikon et al. One backdoor way to try and encourage consumer uptake of a new format is of course to give them tools to make the content themselves, rather than rely on programming content from the majors. There was a limited attempt to do this with 3D, but the cameras were in the main too exotic for day to day use, so they never took off either. But with 4K, and in particular this kind of mid-range camera launch (the A7s is predicted to be around $1700 retail), the company is probably hoping to start the ball rolling with 4K pro-sumer content in earnest.
Eventually, of course, this resolution will trickle down to low end budget equipment, at which point the sheer momentum of consumer purchases will probably drive adoption, but until that time, and it could be a fairly long time coming, ConTech Inc is going to have to cross fingers and toes that they can survive without the revenues that come with a tech gold rush coming in the door. As we’ve seen recently with Sony’s financial results, this could be a very testing, if not terminally painful period.
[NB Oh..and now the tablet market is reaching commodity/saturation, and the smartphone sector is also similarly almost completely saturated, what next for the tech golden child? Where’s the next technology star coming from? Wearables? Probably not.]
2016-40/3983/en_head.json.gz/9810 | Advertisement Advertisement Strangest Creature of Ancient Earth linked to Modern Animals Tue, 08/19/2014 - 3:08pm Comments by University of Cambridge The spines along its back were thought to be legs, its legs thought to be tentacles along its back, and its head was mistaken for its tail. The animal, known as Hallucigenia due to its otherworldly appearance, had been considered an ‘evolutionary misfit’ as it was not clear how it related to modern animal groups. Researchers from the University of Cambridge have discovered an important link with modern velvet worms, also known as onychophorans, a relatively small group of worm-like animals that live in tropical forests. The results are published in the advance online edition of the journal Nature.
The affinity of Hallucigenia and other contemporary ‘legged worms,’ collectively known as lobopodians, has been very controversial, as a lack of clear characteristics linking them to each other or to modern animals has made it difficult to determine their evolutionary home.
What is more, early interpretations of Hallucigenia, which was first identified in the 1970s, placed it both backwards and upside-down. The spines along the creature’s back were originally thought to be legs, its legs were thought to be tentacles along its back, and its head was mistaken for its tail.
Hallucigenia lived approximately 505 million years ago during the Cambrian Explosion, a period of rapid evolution when most major animal groups first appear in the fossil record. These particular fossils come from the Burgess Shale in Canada’s Rocky Mountains, one of the richest Cambrian fossil deposits in the world.
Looking like something from science fiction, Hallucigenia had a row of rigid spines along its back, and seven or eight pairs of legs ending in claws. The animals were between five and 35 millimeters in length, and lived on the floor of the Cambrian oceans.
A new study of the creature’s claws revealed an organization very close to those of modern velvet worms, where layers of cuticle (a hard substance similar to fingernails) are stacked one inside the other, like Russian nesting dolls. The same nesting structure can also be seen in the jaws of velvet worms, which are no more than legs modified for chewing.
“It’s often thought that modern animal groups arose fully formed during the Cambrian Explosion,” said Dr Martin Smith of the University’s Department of Earth Sciences, the paper’s lead author. “But evolution is a gradual process: today’s complex anatomies emerged step by step, one feature at a time. By deciphering ‘in-between’ fossils like Hallucigenia, we can determine how different animal groups built up their modern body plans.”
While Hallucigenia had been suspected to be an ancestor of velvet worms, definitive characteristics linking them together had been hard to come by, and their claws had never been studied in detail. Through analyzing both the prehistoric and living creatures, the researchers found that claws were the connection joining them together. Cambrian fossils continue to produce new information on origins of complex animals, and the use of high-end imaging techniques and data on living organisms further allows researchers to untangle the enigmatic evolution of earliest creatures.
“An exciting outcome of this study is that it turns our current understanding of the evolutionary tree of arthropods — the group including spiders, insects and crustaceans — upside down,” said Dr Javier Ortega-Hernandez, the paper’s co-author. “Most gene-based studies suggest that arthropods and velvet worms are closely related to each other; however, our results indicate that arthropods are actually closer to water bears, or tardigrades, a group of hardy microscopic animals best known for being able to survive the vacuum of space and sub-zero temperatures — leaving velvet worms as distant cousins.”
“The peculiar claws of Hallucigenia are a smoking gun that solves a long and heated debate in evolutionary biology, and may even help to decipher other problematic Cambrian critters,” said Dr Smith.
2016-40/3983/en_head.json.gz/9819 | Backing Biotech Innovation
BIOTECH: Two Firms Capture Huge Share of Region’s 2nd Quarter VC Dollars
By Julie Gallant
Also in the second quarter, Astute Medical of San Diego, a developer of novel, biomarker-based medical diagnostics, announced the completion of a $40.4 million Series C financing led by MPM Capital and including new investor Kaiser Permanente Ventures.
Founded in 2007 by two former members of the management team at Biosite Inc., Chris Hibberd and Paul McPherson, Astute is a diagnostics company that is focusing on the study and validation of biomarkers that can be commercialized as novel tests to serve unmet diagnostic needs. Essentially, the company says it studies new and known biomarkers — individually and in combination — seeking to discover utilities that can be applied to diagnosis or risk assessment. Areas of applications include kidney injury, sepsis, abdominal pain and acute coronary syndrome.
Expanding R&D
Astute Medical is currently preparing to commercialize its first test in Europe. Not yet approved for sale in the U.S., the test is intended to aid in the risk assessment of kidney injury in critically ill patients. The company intends to use the proceeds of its investment to begin commercializing its first product, as well as to advance and expand its research, development and validation of biomarker-based laboratory tests.
“Astute Medical has made rapid progress in its development of biomarker-based diagnostics to address major unmet medical needs,” said Jim Scopa, managing director of MPM Capital. “We are excited to fund the commercialization of Astute’s first product as well as the continued development of the pipeline.”
Another leader in the quest for VC funding is Celladon Corp., a Del Mar biotech founded in 2004. Celladon’s Head of Corporate Development Fredrik Wiklund said the company altogether raised $53 million in two tranches, with $43 million announced in February and an additional $10 million in the same round announced in May. “It’s all considered the same tranche, it’s just that two investors came in at a later date under the same terms,” Wiklund said. “The proceeds will go toward the clinical trial and toward the manufacturing of our product.”
The key driver for the company is Mydicar, which is in a Phase 2b clinical trial and targets a key enzyme deficiency in advanced heart failure.
Heart Failure Cases on the Rise
More than 670,000 new cases of heart failure will be diagnosed in 2010 and that number increases yearly, according to Celladon, which states that the economic burden of heart failure in the U.S. in 2010 was estimated at $39.2 billion.
2016-40/3983/en_head.json.gz/9841 | Sources: Gearbox Shelves Aliens Title Amid Layoffs (Updated)
Update 2: Pitchford has reiterated to Shacknews that production on Aliens is still ongoing, though he did not go into further detail on the status of the project, only stating that to say it has "halted" is an "inaccurate characterization."
Update: Gearbox president Randy Pitchford has responded to the rumors in a comment post, saying that Aliens: Colonial Marines has not been cancelled.
"Aliens isn't canned," said Pitchford. "We've made some transformative changes and yes, that's meant some talent changes, but that's not the real story. The true relevance of the story will actually be irrelevant until we release our next game, at which time I hope there will be a lot of interest in what we've done that can produce such results."
Original story: Production on Gearbox Software's shooter Aliens: Colonial Marines (PC, 360, PS3), a four player co-op shooter based on the classic film franchise, has been halted, according to multiple sources. Several sources have also told Shacknews that Gearbox today laid off between 15 and 25 employees, with one putting the number at 26. These sources independently claimed that the poor performance of Brothers in Arms: Hell's Highway and the stoppage of work on the Aliens title were contributing factors to the layoffs. nope
Though specific sales data on Hell's Highway is unavailable, the title debuted on the NPD group's weekly PC sales chart at ninth position, before quickly falling off the following week. Hell's Highway did not place on the US console sales chart for either September or October--the console versions of Hell's Highway were released on September 23--though publisher Ubisoft partly attributed its strong second quarter sales to the game. Aliens: Colonial Marines was announced in February. Billed as an authentic Aliens shooter, Colonial Marines was backed by a strong pre-production team, including writers from the hit TV show Battlestar Galactica. The Sega-published title was previously scheduled for release in 2009.
Gearbox has expanded to develop a wide range of projects recently, shipping Brothers in Arms: Hell's Highway (PC, 360, PS3) and Samba de Amigo (Wii) earlier this year while simultaneously working on its original shooter Borderlands (PC, 360, PS3). Chatty
2016-40/3983/en_head.json.gz/9865 | NOOK Windows app axed as Microsoft devs own eReader
Barnes & Noble has ceased development on its NOOK app for Windows 8 and Windows Phone, and will instead simply provide content for Microsoft’s own “consumer reader” platform, a regulatory filing has confirmed. Details of Microsoft’s new ereader app have not been released, but B&N has agreed to help transition users of the existing NOOK Windows 8 software onto Microsoft’s own – in addition to flesh out the contents of its virtual store shelves – when it’s available. The NOOK app for Windows 8 has been relatively short-lived, with B&N agreeing to develop it as part of a partnership with Microsoft back in October 2012. Then, Microsoft pushed $300m into a new subsidiary, NOOK MEDIA LLC, to fund development of the software; at the time, it was felt that an ereader app was vital for Windows 8 to be taken seriously, particularly as a tablet OS to rival iPad and Android.
The ereader software was eventually released in February 2013.
Since then, of course, Barnes & Noble’s ebook fortunes have slumped, and the company has slashed its engineering team and ceased internal development of NOOK Tablet hardware. Instead, it plans to license the brand name to third-party manufacturers.
Signs that the NOOK app was on its last legs weren’t in short supply, meanwhile. B&N axed all versions pre-Windows 8 in June last year, in addition to discontinuing the Mac version. Users were directed to the browser interface instead.
Now, B&N has squeezed out of its Windows 8 obligations too, though will still get some cash from Microsoft as part of a modified revenue sharing agreement. Existing users of the NOOK app on Windows will be pushed into the new Microsoft-branded app, with the latter opening automatically instead.
The release date for Microsoft’s “Consumer Reader Platform” is redacted, for the public version of the document anyway.
2016-40/3983/en_head.json.gz/9867 | The Big Business of Neo-Humanity
SlateBillion to OneWhere to find the world's next billionaires.Nov. 14 2013 11:44 AM
The Business of Living Forever
The melding of human and machine intelligence might make us immortal—and might make a bundle for an ingenious few.
By Jathan Sadowski
Illustration by Rob Donnelly At the tender age of 32, Dmitry Itskov is not yet a billionaire, although a lot of respected news outlets think otherwise. He is a millionaire many times over—a survivor of the dot-com bubble who made his fortune building a media empire in Russia. Like many people who become extremely rich very quickly, he has decided to invest some of his money in innovative, forward-looking endeavors. But his idea is more ambitious than most: radical life extension.
In 2011, Itskov founded the 2045 Initiative, which is named for the year when he intends to complete the project’s ultimate goal: to outwit and outrun mortality itself. His “avatar” project is a four-stage process, beginning with the development of androids directed by brain-computer interfacing—mind-controlled robots, in other words. It would culminate in a computer model of a person’s brain and consciousness, which could be uploaded into a machine for posterity. An eternal problem, solved.
To achieve cybernetic immortality and turn what he calls his “science mega-project” into a reality, Itskov’s 2045 Initiative is funding labs around the world; Itskov is both investing his own money and raising external capital, building support among entities ranging from Ivy League universities to large corporations to even the Dalai Lama.
Advertisement Even if Itskov doesn’t reach his final goal of radical life extension via avatars, the amount of attention he’s bringing and money he’s investing in neurotech research have many people excited. And Itskov is just one in an increasingly crowded field. One of the big brains involved in the 2045 Initiative is Ray Kurzweil, the famed inventor and futurist who popularized the concept of the singularity. Kurzweil is also, along with Stanford University computer science professor Andrew Ng, working with Google to develop the much-discussed artificial-intelligence system called Google Brain. The project is based on Ng’s field of research, known as deep learning, which melds computer science and engineering to construct machines that process data in ways similar to the workings of the human brain.
“There is a sense from many places that whoever figures out how the brain computes will come up with the next generation of computers,” Thomas Insel, director of the National Institute of Mental Health, told Wired recently. That helps explain why other behemoth companies—including Microsoft, IBM, Apple, and Chinese search giant Baidu—have been racing to set up their own deep-learning research arms. At Stanford, meanwhile, bioengineers are making great strides toward reverse engineering the brain through neuromorphic technologies: “systems of nondigital chips that function as much as possible like networks of real neurons,” as a recent Nature article explained. Ultra-efficient neuromorphic hardware, Nature added, could be used in anything “from smartphones and robots to artificial eyes and ears.” The practical applications are endless—and potentially the makings of big business.
While those techies are working on artificial brains, other scientists and engineers are using technology to tweak, repair, and upgrade the actual human brain, and their intrepid research has similar business potential. Neurotech—whether to render humans immortal or not—is “one of the most dramatic growth areas of the 21st century,” according to the market watchers at Neurotech Business Report, which predicts neurotech’s market size to double between 2012 and 2016. Last month, the publication hosted the Neurotech Leaders Forum, which one attendee described to me as “like a dating service for researchers and venture capitalists.” This year’s Aspen Brain Forum focused on accelerating translational neurotech: moving research from lab, to prototype, to application, and finally to market. In its first five years, more than 100 groups have joined the Neurotechnology Industry Organization, an association that host conferences, advocates for neurotech investment, and lobbies for policies that support growth and innovation.
MIT is betting that the next Larry Page or Sergey Brin will emerge from the university’s burgeoning neurotech hive.
Neurotech is also enjoying a boost from public coffers. Last year in the journal Neuron, a group of leading researchers proposed a large-scale effort they called the Brain Activity Map Project. They predicted that neurotech “will provide economic benefits, potentially leading to the emergence of entirely new industries and commercial ventures.” This project has since been subsumed into the recently announced BRAIN (Brain Research Through Advancing Innovative Neurotechnologies) Initiative, which will, in its first year alone, grant $110 million in federal funding plus another $122 million from private organizations toward R&D aimed at creating new, applicable neurotech. Add to that $1.3 billion from the European Union’s Human Brain Project and you’re looking at a solid base.
Ed Boyden and Joost Bonsen, two faculty members in the famed MIT Media Lab, anticipate that neurotech is poised to take off much in the same way that biotechnology did. As Bonsen recently told MIT News, neurotech analogs to Biogen and Genzyme “are being born or blossoming now.” Boyden and Bonsen’s MIT course, “Neurotechnology Ventures,” is intended to “seed the Silicon Valley of neurotech,” as Boyden put it. They’re betting that the next billionaire tech visionary—the next Larry Page or Sergey Brin—will emerge not from the Stanford computer science department, but from MIT’s burgeoning neurotech hive.
Not that Stanford is resting on its laurels. Within five years, Stanford bioengineer Kwabena Boahen told Nature, “We envision building fully autonomous robots that interact with their environments in a meaningful way, and operate in real time while their brains consume as much electricity as a cellphone.” These are the kinds of robots that populate Dmitry Itskov’s dreams. The hope is that innovations in deep learning and neurotech might eventually meet, melding human and computer intelligence in ways that will allow us to live forever—or at least longer. If that happens, the great scientific and business minds that monetize deep learning and neurotech will have more than enough in their coffers to fund an eternal retirement.
Jathan Sadowski is a graduate student studying applied ethics and the human and social dimensions of science and technology at Arizona State University.
2016-40/3983/en_head.json.gz/9899 | NASA TV - Education
NASA TV - Digital Learning Network
NASA TV - Social
NASA TV - Media
NASA TV - International Space Station
NASA TV - Headquarters
NASA TV - Jet Propulsion Laboratory
NASA TV - Jet Propulsion Laboratory (2)
NASA TV - Kennedy Space Center
NASA TV - Goddard Space Flight Center
NASA TV - Marshall Space Flight Center
NASA TV - Wallops Flight Facility
NASA Audio Channel
Asteroids and Comets
Gemini Observatory Captures "Perfection" With Image From New High-Tech Instrument
From: Particle Physics and Astronomy Research Council Posted: Tuesday, October 2, 2001 A remarkable first light image has been obtained with a new state-of the-art instrument at the Gemini North Telescope on Hawaii's Mauna Kea. The image of the large galaxy in Pisces called NGC 628 (or Messier 74) has been called the "Perfect Spiral Galaxy" due to its nearly ideal form, which is clearly revealed in this new image. Named GMOS or the Gemini Multi-Object Spectrograph, the instrument that took the image is primarily designed for spectroscopic studies where several hundred simultaneous spectra are required, such as when observing star and galaxy clusters. However, as the dramatic new image demonstrates, GMOS also has the ability to focus beautiful astronomical images on its huge array of almost 24 million ultra-sensitive pixels. When combined with Gemini's 8.1-metre main mirror, the GMOS first-light image of this spiral galaxy leaves no doubt about the instrument's potential on Gemini. The instrument's first light image of the galaxy that is number 74 in Charles Messier's catalogue of celestial show-pieces (a.k.a. M-74), clearly shows many features of the galaxy such as star clusters, gas clouds and dust lanes. Some of these objects are similar to what we can see in our own Milky Way with the naked eye or small telescope on a clear moonless night. "To be able to routinely see fine details like this in a galaxy more than 30 million light years away is quite remarkable and helps to give some perspective of what our own galaxy might look like if there were another Gemini sized telescope looking back at us!" says Gemini North's Associate Director Dr. Jean-Rene Roy. It is estimated that M-74 is home to about 100 billion stars making it slightly smaller than our Milky Way."This instrument took world-class data on its first night, performing perfectly, right out of the box, or at least the 24 crates that brought the 2-ton instrument to Hawaii from Canada and the UK," said Gemini Observatory Director Dr. Matt Mountain. UK Scientists played a key role in designing and building the GMOS instrument during its seven year construction period.Dr. Mountain added, "This is a considerable testament to the professionalism, planning and teamwork of the multi-national group of astronomers and engineers from the UK's Astronomy Technology Centre, Durham University and the Hertzberg Institute of Astrophysics in Canada who were able to build this instrument and commission it with our staff so successfully here on Mauna Kea. This type of multi-disciplined, multi-national effort represents a new and powerful way to do world-class observational astrophysics," continued Dr. Mountain. The instrument was built as a joint partnership between Gemini, Canada and the UK at a cost of over �3 million. Separately, the U.S. National Optical Astronomy Observatory provided the detector subsystem and related software.Dr Adrian Russell, Director of the UK Astronomy Technology Centre in Edinburgh said: "I am proud of the achievements of everyone who has worked so hard to make this project such a success. The fact that it was commissioned so smoothly is a testament that hard work. The Gemini Partnership now has an immensely powerful scientific tool with which to study the Universe. The best is yet to come." It is anticipated that GMOS will begin full scientific operations later this year when astronomers from the Gemini partnership, in which the UK has a 25% share, begin using the instrument for a wide variety of scientific studies. 
"It is extremely exciting to see the wide range of cutting-edge observations already scheduled for GMOS over the next few months," said Gemini Astronomer Dr. Inger Jorgensen who led the instrument's commissioning effort. Dr. Jorgensen also said, "I'm most interested in the planned observations of distant galaxy clusters where Gemini is able to work like a time machine and look back in time to study a much younger universe than we see around us today." The Dr. Isobel Hook from the UK Gemini Support Group who helped obtain the instrument's first multi-object spectroscopic data said, "The first spectra produced by GMOS were brilliant! When you combine GMOS with Gemini's resolution and great light gathering power we are able to study details that would otherwise be lost. One area where I think this instrument will excel is in the study of supernova, or exploding stars in very distant galaxies. Once we can obtain spectra from these stars we will be able to better understand the apparent acceleration of the universe." Professor Roger Davies from Durham University is the leader of the UK's GMOS team. He obtained some early scientific demonstration data that will soon be released to astronomers. For this observation, the light from individual galaxies in a distant, massive swarm of galaxies was collected. According to Davies, "We were able to observe these galaxies as easily as if they were our close neighbours. Now we'll use this superb spectroscopic data to determine their mass, size and composition and look back in time to see how they have changed through cosmic history. The combination of Gemini's tremendous light collecting power and the technology of GMOS allowed us to obtain phenomenal data only a few days after the instrument was installed on the telescope. I can see that this instrument is going to keep astronomers very busy and extremely happy for a long time!"
ContactsPeter Barratt - PPARC Press OfficeTel: 01793 442025Email: [email protected] Adrian Russell - Director, UKATCTel: 0131 668 8100Email: [email protected] Roger Davies - University of DurhamTel: 0191 374 2163Email: [email protected] Paterson - UK Gemini Project ManagerTel: 0131 668 8100Email: [email protected] image of the spiral galaxy can be downloaded from the PPARC website www.pparc.ac.uk. Photo credit: Gemini Observatory - GMOS team. Alternatively, a high resolution version can be downloaded from www.gemini.edu/project/announcements/press/2001-2.htmlFurther information and images of GMOS can be found on the following websites:-www.gemini.edu/gallery/instrument/gmos/www.roe.ac.uk/atc/projects/gmos/http://aig-www.dur.ac.uk/fix/projects/projects_index.htmlNotes to EditorsThe Gemini Observatory is an international collaboration that has built two identical 8-meter telescopes. The telescopes are located at Mauna Kea, Hawaii (Gemini North) and Cerro Pach�n in central Chile (Gemini South), and hence provide full coverage of both hemispheres of the sky. Both telescopes incorporate new technologies that allow large, relatively thin mirrors under active control to collect and focus both optical and infrared radiation from space. Gemini North has begun science operations and Gemini South is scheduled to begin scientific operations in late 2001.The Gemini Observatory provides the astronomical communities in each partner country with state-of-the-art astronomical facilities that allocate observing time in proportion to each country's contribution. In addition to financial support, each country also contributes significant scientific and technical resources. The national research agencies that form the Gemini partnership include: the US National Science Foundation (NSF), the UK Particle Physics and Astronomy Research Council (PPARC), the Canadian National Research Council (NRC), the Chilean Comisi�n Nacional de Investigaci�n Cientifica y Tecnol�gica (CONICYT), the Australian Research Council (ARC), the Argentinean Consejo Nacional de Investigaciones Cient�ficas y T�cnicas (CONICET) and the Brazilian Conselho Nacional de Desenvolvimento Cient�fico e Tecnol�gico (CNPq). The Observatory is managed by the Association of Universities for Research in Astronomy, Inc. (AURA) under a cooperative agreement with the NSF. The NSF also serves as the executive agency for the international partnership.The Particle Physics and Astronomy Research Council (PPARC) is the UK's strategic science investment agency. It funds research, education and public understanding in four broad areas of science - particle physics, astronomy, cosmology and space science. PPARC is government funded and provides research grants and studentships to scientists in British universities, gives researchers access to world-class facilities and funds the UK membership of international bodies such as the European Organisation for Nuclear Research, CERN, and the European Space Agency. It also contributes money for the UK telescopes overseas on La Palma, Hawaii, Australia and in Chile, the UK Astronomy Technology Centre at the Royal Observatory, Edinburgh and the MERLIN/VLBI National Facility.PPARC's Public Understanding of Science and Technology Awards Scheme provides funding to both small local projects and national initiatives aimed at improving public understanding of its areas of science. Contact
Peter Barratt
PPARC
[email protected] // end //
2016-40/3983/en_head.json.gz/10111 | Place an Ad Business | World/National Business Project aims to track big city carbon footprints
LOS ANGELES � Every time Los Angeles exhales, odd-looking gadgets anchored in the mountains above the city trace the invisible puffs of carbon dioxide, methane and other greenhouse gases that waft skyward. Halfway around the globe, similar contraptions atop the Eiffel Tower and elsewhere around Paris keep a pulse on emissions from smokestacks and automobile tailpipes. And there is talk of outfitting Sao Paulo, Brazil, with sensors that sniff the byproducts of burning fossil fuels.It�s part of a budding effort to track the carbon footprints of megacities, urban hubs with over 10 million people that are increasingly responsible for human-caused global warming.For years, carbon dioxide and other greenhouse pollutants have been closely monitored around the planet by stations on the ground and in space. Last week, worldwide levels of carbon dioxide reached 400 parts per million at a Hawaii station that sets the global benchmark � a concentration not seen in millions of years. Now, some scientists are eyeing large cities � with LA and Paris as guinea pigs � and aiming to observe emissions in the atmosphere as a first step toward independently verifying whether local � and often lofty � climate goals are being met. For the past year, a high-tech sensor poking out from a converted shipping container has stared at the Los Angeles basin from its mile-high perch on Mount Wilson, a peak in the San Gabriel Mountains that�s home to a famous observatory and communication towers.Like a satellite gazing down on Earth, it scans more than two dozen points from the inland desert to the coast. Every few minutes, it rumbles to life as it automatically sweeps the horizon, measuring sunlight bouncing off the surface for the unique fingerprint of carbon dioxide and other heat-trapping gases. In a storage room next door, commercially available instruments that typically monitor air quality double as climate sniffers. And in nearby Pasadena, a refurbished vintage solar telescope on the roof of a laboratory on the California Institute of Technology campus captures sunlight and sends it down a shaft 60 feet below where a prism-like instrument separates out carbon dioxide molecules. On a recent April afternoon atop Mount Wilson, a brown haze hung over the city, the accumulation of dust and smoke particles in the atmosphere. �There are some days where we can see 150 miles way out to the Channel Islands and there are some days where we have trouble even seeing what�s down here in the foreground,� said Stanley Sander, a senior research scientist at the NASA Jet Propulsion Laboratory.What Sander and others are after are the mostly invisible greenhouse gases spewing from factories and freeways below.There are plans to expand the network. This summer, technicians will install commercial gas analyzers at a dozen more rooftops around the greater LA region. Scientists also plan to drive around the city in a Prius outfitted with a portable emission-measuring device and fly a research aircraft to pinpoint methane hotspots from the sky (A well-known natural source is the La Brea Tar Pits in the heart of LA where underground bacteria burp bubbles of methane gas to the surface.) Six years ago, elected officials vowed to reduce emissions to 35 percent below 1990 levels by 2030 by shifting to renewable energy and weaning the city�s dependence on out-of-state coal-fired plants, greening the twin port complex and airports and retrofitting city buildings. 
It�s impractical to blanket the city with instruments so scientists rely on a handful of sensors and use computer models to work backward to determine the sources of the emissions and whether they�re increasing. They won�t be able to zero in on an offending street or a landfill, but they hope to be able to tell whether switching buses from diesel to alternative fuel has made a dent. Project manager Riley Duren of JPL said it�ll take several years of monitoring to know whether LA is on track to reach its goal. Scientists not involved with the project say it makes sense to dissect emissions on a city level to confirm whether certain strategies to curb greenhouse gases are working. But they�re divided about the focus. Allen Robinson, an air quality expert at Carnegie Mellon University, said he prefers more attention paid to measuring a city�s methane emissions since scientists know less about them than carbon dioxide release. Nearly 58 percent of California�s carbon dioxide emissions in 2010 came from gasoline-powered vehicles, according to the U.S. Energy Department�s latest figures.In much of the country, coal �usually as fuel for electric power � is a major source of carbon dioxide pollution. But in California, it�s responsible for a tad more than 1 percent of the state�s carbon dioxide emissions. Natural gas, considered a cleaner fuel, spews one third of the state�s carbon dioxide.Overall, California in 2010 released about 408 million tons of carbon dioxide into the air. The state�s carbon dioxide pollution is greater than all but 20 countries and is just ahead of Spain�s emissions. In 2010, California put nearly 11 tons of carbon dioxide into the air for every person, which is lower than the national average of 20 tons per person.Gregg Marland, an Appalachian State University professor who has tracked worldwide emissions for the Energy Department, said there�s value in learning about a city�s emissions and testing techniques. �I don�t think we need to try this in many places, but we have to try some to see what works and what we can do,� he said.Launching the monitoring project came with the usual growing pains. In Paris, a carbon sniffer originally tucked away in the Eiffel Tower�s observation deck had to be moved to a higher floor that�s off-limits to the public after tourists� exhaling interfered with the data. So far, $3 million have been spent on the U.S. effort with funding from federal, state and private groups. The French, backed by different sponsors, have spent roughly the same.Scientists hope to strengthen their ground measurements with upcoming launches of Earth satellites designed to track carbon dioxide from orbit. The field experiment does not yet extend to China, by far the world�s biggest carbon dioxide polluter. But it�s a start, experts say. With the focus on megacities, others have worked to decipher the carbon footprint of smaller places like Indianapolis, Boston and Oakland, where University of California, Berkeley researchers have taken a different tack and blanketed school rooftops with relatively inexpensive sensors. �We are at a very early stage of knowing the best strategy, and need to learn the pros and cons of different approaches,� said Inez Fung, a professor of atmospheric science at Berkeley who has no role in the various projects. | 科技 |
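The "work backward" step described above is, at bottom, an inverse problem: given a handful of sensor readings and an estimate of how strongly each source region influences each sensor, solve for the source strengths that best explain what was measured. A minimal sketch of that idea is below; the sensitivity numbers and readings are invented for illustration, and the real Los Angeles and Paris projects rely on full atmospheric transport models rather than a fixed matrix.

```python
# Illustrative only: estimate emission source strengths from a handful of
# sensor readings, assuming a known (here, made-up) sensitivity of each
# sensor to each source. Real inversions use atmospheric transport models.
import numpy as np

# sensitivity[i, j]: how strongly source j contributes to sensor i's reading
# (hypothetical numbers; in practice these come from transport modeling)
sensitivity = np.array([
    [0.8, 0.1, 0.3],   # mountain-top sensor
    [0.2, 0.9, 0.4],   # downtown rooftop sensor
    [0.5, 0.3, 0.7],   # coastal sensor
    [0.4, 0.6, 0.2],   # valley sensor
])

# observed enhancements above background at each sensor (hypothetical, ppm CO2)
readings = np.array([2.1, 3.0, 2.6, 2.2])

# Solve the over-determined system in a least-squares sense to "work backward"
# to the source strengths that best explain the observations.
strengths, residuals, rank, _ = np.linalg.lstsq(sensitivity, readings, rcond=None)

for name, s in zip(["freeways", "industry", "ports"], strengths):
    print(f"estimated {name} contribution: {s:.2f}")
```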
2016-40/3983/en_head.json.gz/10122 | Transaction Processing Performance Council Announces Annual International Technology Conference on Performance Evaluation and Benchmarking (TPCTC 2012)
SAN FRANCISCO --(Business Wire)-- The Transaction Processing Performance Council (TPC) today announced a call-for-papers for its fourth annual Technology Conference on Performance Evaluation and Benchmarking (TPCTC 2012). The conference will be collocated with the 38th International Conference on Very Large Data Bases (VLDB 2012) on August 27, 2012 in Istanbul, Turkey.
The TPC is a co-sponsor of VLDB 2012, and conference registration information is available online at http://www.vldb2012.org/. Selected papers will be presented during the conference, published in Springer's Lecture Notes on Computer Science (both in print and electronic formats), and may be considered for either future benchmark developments or enhancements to existing benchmark standards.
The deadline for abstract submission is May 25, 2012. Researchers and industry experts are encouraged to submit ideas and methodologies in performance evaluation, measurement and characterization in areas including, but not limited to: big data, cloud computing, business intelligence, energy and space efficiency, hardware and software innovations and lessons learned in practice using TPC and other benchmark workloads. Further information is available online at http://www.tpc.org/tpctc2012/.
Notably, Michael J. Carey, Donald Bren Professor of Computer and Information Sciences at the University of California, Irvine, is confirmed as the TPCTC keynote speaker. "TPC benchmarks have been hugely influential in driving technical progress in the database field for many years. The TPC continues to bring new benchmark standards to the industry, and the TPCTC is an invaluable forum for presenting and discussing new ideas for future benchmark standards," said Dr. Carey. "Ideas submitted at past conferences have culminated in the creation of benchmark development committees for several potential standards, and with so much buzz today related to cloud computin, NoSQL databases, and 'Big Data', this year's conference has the potential to be particularly exciting."
"The TPCTC is an unparalleled venue in which innovative ideas are presented in the context of developing next generation benchmark standards," said Raghunath Nambiar, general chair of the conference and a performance strategist at Cisco (News - Alert). "Industry experts have voiced overwhelming support for past conferences, and we anticipate a number of compelling new ideas this year in emerging areas like Big Data management and analytics."
Organizations that are interested in influencing the TPC benchmarking development process are encouraged to become members. Additional information is available online at http://www.tpc.org/information/about/join.asp.
TPC-DS
The TPC is also announcing a new decision support benchmark (TPC-DS), which has been carefully designed to measure query throughput and data integration performance for a given hardware configuration, operating system, and DBMS configuration under a controlled, complex, multi-user decision support workload.
TPC-DS models the decision support functions of a retail product supplier. The supporting schema contains vital business information such as customer, order and product data. Its workload is designed to test the upward boundaries of hardware system performance in several areas including CPU utilization, memory utilization, I/O subsystem utilization and the ability of the operating system and database software to perform various complex functions important to decision support systems (DSS). These areas include examining large volumes of data, computing and executing the best execution plan for queries with a high degree of complexity, efficiently scheduling a large number of user sessions, giving answers to critical business questions and periodically synchronizing the data warehouse by means of a data integration process with OLTP sources.
"TPC-DS assesses a broad range of system topologies and implementation methodologies in a technically rigorous and directly comparable, vendor-neutral manner," said Meikel Poess, Chairman of the TPC-DS committee. "It is the first benchmark specification to integrate key workloads of modern decision support systems including ad-hoc queries, reporting queries, OLAP queries, data mining queries and data integration from OLTP systems."
Additional information is available online: http://www.tpc.org/tpcds/
About VLDB
VLDB is a premier annual international forum for data management and database researchers, vendors, practitioners, application developers, and users. The conference will feature research talks, tutorials, demonstrations, and workshops. It will cover current issues in data management, database and information systems research. Data management and databases remain among the main technological cornerstones of emerging applications of the twenty-first century.
VLDB 2012 will take place at the Hilton Hotel in Istanbul, Turkey on August 27 - 31.
About the TPC
The TPC is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable TPC performance data to the industry. The TPC currently has 18 full members: AMD, Bull, Cisco, Dell, Fujitsu, Huawei (News - Alert), HP, Hitachi, IBM, Intel, Microsoft, NEC, Oracle, Red Hat, Sybase, Teradata, Unisys and VMware; and five associate members: Ideas International, ITOM International Co, San Diego Supercomputer Center, Telecommunications Technology Association and the University of Coimbra. Further information is available at http://www.tpc.org.
TPCTC 2012 Contacts
Raghunath Nambiar, General Chair, [email protected]
Meikel Poess, General Chair, [email protected] | 科技 |
2016-40/3983/en_head.json.gz/10147 | Home / News / Games News / Sony Announces UK Pricing For PS3 Games Sony Announces UK Pricing For PS3 Games
With the launch of Europe’s much delayed and now crippled PlayStation 3 little more than three weeks away Sony desperately needed to feed us a bit of good news... and now it has.Official pricing has finally been announced for the troubled console’s games and many will be surprised to learn they will retail for just £39.99, a figure which matches titles on the Nintendo Wii and even undercuts Xbox 360 releases by a whole ten quid.
On top of this we also know that content downloaded from the PlayStation Network will begin from a mere €0.99 (which will typically cover game expansions like additional maps/tracks) while older downloadable titles (which include Tekken 5) range from €2.99 to €9.99.
As you would expect Sony was highly chipper about this and sent forth David Reeves, the President and CEO of SCEE, to enjoy a rare deserved moment of chest beating: “Not only will it be completely free to register on the PlayStation Network with no subscription fees and access to many free demos, but with these competitive prices for additional content we are able to offer the consumer both top quality games on Blu-ray discs and a whole range of downloadable content.”
So far so good, but then he lost the plot: ”With over 30 first and third party disc and network games available at launch, we are confident that this will be one of the most successful launches of all time." We’ll see David, we’ll see...
But hey, at least they’re not fifty quid!
Link:PlayStation 3 Europe | 科技 |
2016-40/3983/en_head.json.gz/10149 | Home > content > Gamers turn cities into a battleground
Gamers turn cities into a battleground
Submitted by srlinuxx on Monday 13th of June 2005 03:02:29 AM Filed under Gaming [1]
Matt has been abandoned on Tower Bridge, London, with nothing except his clothes and a mobile phone. A woman dressed in black walks past, and Matt receives a text message to follow her. He doesn't know who she is, or where she is going. All he knows is that he must follow her if he is to find Uncle Roy.
Matt is playing Uncle Roy All Around You, where for one day he is the main character in an elaborate experimental fantasy game played out across the streets of London. He also happens to be a pioneer of a new social phenomenon, urban gaming. If you thought the computer games of the 21st century are only ever played by couch potatoes addicted to the new generation of Xbox, Nintendo or PlayStation consoles, you'd be mistaken. For urban gamers are harnessing the power of global positioning systems (GPS), high-resolution screens and cameras and the latest mobile phones to play games across our towns and cities, where they become spies, vampire slayers, celebrities and even Pac-Man.
Urban gaming started in the 1990s with the advent of "geocaching", where GPS is used to pinpoint exact locations. Players buried "treasure" then posted the longitude and latitude coordinates online, allowing others to hunt for the prize. Such treasure hunts have become extremely popular and are played by hundreds of thousands of people worldwide, with prizes buried in ever more exotic locations, even underwater.
"The limitations of physical space makes playing the game exciting," says Michele Chang, a technology ethnographer with Intel in Portland, Oregon. There is also a social element, says Chang. Last year, as a social experiment to see how people behave with real-world games, she created Digital Street Game, which ran for six months in New York. The aim was to acquire territory by performing stunts dictated by the game at public locations around the city, such as playing hopscotch at a crossroads while holding a hot-dog. "People are more reserved than you would imagine," says Chang. Some players took to performing their stunt on rooftops to avoid being seen, she says, while others relished being ostentatious - like players of Pac-Manhattan, in which New Yorkers dress up as the video game icon Pac-Man and flee other gamers dressed up as ghosts.
While many of the first real-world games involved using separate GPS receivers and handheld computers, mobile phones and PDAs that integrate such technology are catching up. "I think we are going to see more and more games that blend with our real lives." Uncle Roy All Around You is one such game, developed by interactive technology researcher Steve Benford at the University of Nottingham, UK. Another phone-based game is a variant of the classic arcade game Tron. Two or more players, who may never have met, speed through a city leaving a virtual trail behind them that is plotted on their mobile phone screens. There is one rule: you can't cross your own trail or that of the other player.
Soon you may even be able to play games using phones without GPS hardware. One being played by 30,000 people in Sweden, Russia, Ireland, Finland and now China is called BotFighters. Produced by It's Alive, BotFighters is a variant on Dungeons and Dragons role-playing games.
The company has even bigger plans, developing a game that exploits a digital camera already built into the console. Virtual creatures live at specific GPS coordinates, and when a player views the location through the camera they will see the real world with a three-dimensional animated digital creature laid over the scene.
Game designers face the challenge of how to preclude "cyber-stalking", and protect the safety of the public and players, especially children, who might wander into unsafe situations or places. But ultimately, urban games may encourage a generation of console geeks to get off the sofa. "I have literally run around a park interacting with virtual creatures," says Hilton. "I'm going to have to get seriously fit if I want to develop one game I'm working on."
[2] http://www.newscientist.com/article.ns?id=dn7498 | 科技 |
2016-40/3983/en_head.json.gz/10180 | $12.3M center aims to ramp up design of advanced materials
By Nicole Casal Moore
It takes between 10 and 20 years to develop a new material — an advanced metal alloy, for example, that can be used in lighter cars, trucks and airplanes. That's too long, says John Allison, professor of materials science and engineering. With an $11 million, five-year grant from the U.S. Department of Energy (DoE), Allison is leading a project that aims to drastically shorten that time. The funding comes from the Materials Genome Initiative, President Obama's plan to double the speed with which American scientists and engineers discover, develop, and manufacture new materials. In addition to the DoE grant, the university is providing $1.3 million toward the effort. The grants establish a DoE Software Innovation Center called the PRedictive Integrated Structural Materials Science Center, or PRISMS. "Materials have been a defining technology for humans since the beginning — the Stone Age, the Bronze Age, and now we have the Silicon Age," Allison says. "Going forward, we need new materials to solve enormous engineering challenges around critical issues such as global warming. We don't have as much time as we used to."
Researchers at the center will build a set of integrated, open-source computational tools that materials researchers in academia and industry can use to simulate how proposed materials might behave in the real world. The software tools will provide a radical change from the traditional trial-and-error approach, Allison says. Trial and error managed to double the strength of aluminum alloys since the Wright brothers' time, but it took 80 years. "PRISMS will give us a quantitative means to figure out which materials knob we should be turning," Allison says. "If I were studying fatigue of metals, for example, and I wanted to understand how to improve that property, I'd want to quantify or simulate how a certain microstructural feature might affect it."
More than 160,000 engineering materials exist today, and most are mixes of between six and 10 different elements. These materials can have different properties at various scales, from that of the atom, up to the microstructure, to the end product, whether that's a laptop battery, solar cell or car door. It's challenging for the field to predict how each different combination of elements will behave at each of these levels, and that's why Allison says materials science hasn't kept pace with industry needs. "We're starting to fall behind because the product development and manufacturing fields now have computational tools to design new aircraft components and manufacturing approaches in days, but for materials it still takes much longer. We're losing opportunities to really advance new products," he says. "The country and the companies that figure this out will have a major competitive advantage."
Allison says the materials field is at a tipping point. "The ability to integrate knowledge across length scales and different technical domains has been a major challenge but the needs for this are now very clear. We believe that the integrated computational tools our team will be developing will serve as a scientific core for a transformational new approach to materials development."
The PRISMS team of 11 faculty members from across the College of Engineering and the School of Information will demonstrate their new approach on magnesium, the lightest-weight metal, which has applications in the auto, aerospace and electronics industries. In addition to Allison, faculty members involved in the PRISMS Center are:
• Samantha Daly, assistant professor of mechanical engineering.
• Krishna Garikipati, professor of mechanical engineering.
• Vikram Gavini, assistant professor of mechanical engineering.
• Margaret Hedstrom, professor and associate dean for academic programs at the School of Information.
• H.V. Jagadish, the Bernard A. Galler Collegiate Professor of Electrical Engineering and Computer Science.
• J. Wayne Jones, professor of materials science and engineering.
• Emmanuelle Marquis, assistant professor of materials science and engineering.
• Veera Sundararaghavan, assistant professor of aerospace engineering.
• Katsuyo Thornton, associate professor of materials science and engineering.
• Anton Van der Ven, associate professor of materials science and engineering.
2016-40/3983/en_head.json.gz/10231 | Google Appoints Shirley M. Tilghman, Ph.D., to its Board of Directors
WEBWIRE – Wednesday, October 5, 2005
MOUNTAIN VIEW, CA - October 5, 2005 - Google Inc. (NASDAQ: GOOG) today announced that Shirley M. Tilghman, Princeton University�s President and Professor of Molecular Biology, was unanimously elected to join Google�s Board of Directors. Dr. Tilghman is a world-renowned scholar, an exceptional teacher, and is respected worldwide for her pioneering research and advocacy of women in science.
�It�s an honor to welcome a woman of Dr. Tilghman�s reputation to our board,� said Eric Schmidt, Chairman and CEO of Google. �Google is a company born out of university research, so we look forward to tapping into her extraordinary talents as an accomplished academic, and as a champion of discovery.�
Dr. Tilghman made her mark during postdoctoral studies at the National Institutes of Health, where she participated in cloning the first mammalian gene. She continued to make scientific breakthroughs in the field of mammalian genetics as a member of the Institute for Cancer Research in Philadelphia and an adjunct associate professor of human genetics and biochemistry and biophysics at the University of Pennsylvania. Tilghman was a member of the National Research Council�s committee that set the blueprint for the U.S. effort in the Human Genome Project and went on to become one of the founding members of the National Advisory Council of the Human Genome Project Initiative for the National Institutes of Health.
A native of Canada, Tilghman received her Honors B.Sc. in chemistry from Queen�s University in Kingston, Ontario, and earned her Ph.D. in biochemistry from Temple University in Philadelphia. She joined Princeton in 1986 as the Howard A. Prior Professor of the Life Sciences, and took the role as founding director of Princeton�s multi-disciplinary Lewis-Sigler Institute for Integrative Genomics in 1998. After serving on Princeton University�s faculty for 15 years, Dr. Tilghman became the University�s president in June 2001.
An advocate for encouraging women in science, she received national attention for a report on �Trends in the Careers of Life Scientists� that was issued in 1998 by a committee she chaired for the National Research Council. A recipient of numerous awards and honorary degrees, she is a member of the Royal Society of London, the U.S. National Academy of Sciences and the American Philosophical Society.
About Google Inc.
Google�s innovative search technologies connect millions of people around the world with information every day. Founded in 1998 by Stanford Ph.D. students Larry Page and Sergey Brin, Google today is a top web property in all major global markets. Google�s targeted advertising program provides businesses of all sizes with measurable results, while enhancing the overall web experience for users. Google is headquartered in Silicon Valley with offices throughout the Americas, Europe and Asia. For more information, visit www.google.com.
Google is a registered trademark of Google Inc. All other company and product names may
be trademarks of the respective companies with which they are associated.
Lynn Fox
Electronic / Internet Commerce
Multimedia / Online / Internet | 科技 |
2016-40/3983/en_head.json.gz/10308 | Articles: Storage
(87) SanDisk Extreme IV Compact Flash Card and New Card-Readers. Page 2
[07/30/2007 06:52 PM | Storage] by Aleksey Meyev
SanDisk Extreme IV flash card is guaranteed to deliver superb performance compared to the previous series cards in virtually every parameter. But you should be aware that the card can only give you its best when you insert it into an appropriate reader. Find out more about the best SanDisk card-readers in our new article.
Compact Flash Format
The Compact Flash format was developed by SanDisk in 1994. On October 11, 1995, the Compact Flash Association was established by twelve companies (Apple Computer, Canon, Eastman Kodak, Hewlett-Packard, LG Semicon, Matsushita, Motorola, NEC, Polaroid, SanDisk, Seagate and Seiko Epson) to standardize and promote it. The first revision of the specification was ratified then. The goal of the new standard was to preserve all the advantages of ATA Flash cards while eliminating their main drawback: their large size. Compact Flash got a PCMCIA-compliant 50-pin parallel interface and dimensions of 36 x 43 x 3.3 millimeters. You can install a Compact Flash card into a PCMCIA slot via a simple adapter. All CF cards support two voltages, 3.3V and 5V.
The CF card is based on flash memory of the EEPROM type (Electrically Erasable Programmable Read-Only Memory) whose characteristic features are:
Lack of moving parts
Non-volatile (no additional power is necessary to store the data)
High reliability of the chip, tolerance to magnetic fields
Another feature of this memory type is that memory cells are accessed in blocks. A block of several cells is read or written to at once even if only some of the block data are actually required. If data has to be written into a partially free block, the existing information is read and merged with the new information, and then the resulting block is written in full instead of the old one. This method has a rather poor random access time, but also high sequential read speeds.
Memory cells get destroyed from being rewritten and their service life is about 100 thousand rewrite cycles. The controllers of modern cards feature special tracking algorithms for distributing data evenly among all the card cells, which makes the service life of the whole card longer. The controllers also keep track of the status of particular cells. When a cell is destroyed, the entire block with that cell is marked as destroyed and is replaced with a reserve block. Each card has reserve blocks for that purpose. And when there are no more reserve blocks, the card capacity begins to shrink as more blocks get destroyed from use. It is a very unlikely event for your flash card to get destroyed fully. More probably, its capacity won’t suit you anymore and you will replace it with a larger card before that happens.
Interesting to note, early Compact Flash cards used to come in capacities of 2, 4, 10 and 15MB, and the standard described a maximum data-transfer rate of 8MB/s. The standard was evolving steadily towards larger capacities, higher speeds, and broader functionality. It quickly became the most widespread among flash card formats.
In March 1998 the Type II card specification was added (and ordinary cards began to be called Type I). The new cards were thicker (5 millimeters as opposed to 3.3 millimeters) to accommodate more storage space. Type I cards were compatible with Type II connectors and such connectors came to be used everywhere although the cards themselves were getting less and less popular.
In the fall of the same year the CF+ specs (CF 1.4) were written to describe input/output functions for devices designed in Compact Flash format. Various fax-modems, Ethernet adapters, barcode readers and, eventually, TV-tuners, Bluetooth adapters, GPS receivers and Wi-Fi became available in Compact Flash format, making it even more popular.
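To make the block-based write and reserve-block behavior described above more concrete, here is a minimal toy sketch. It is not from the original review and is not real controller firmware; the block size and reserve count are made-up stand-ins, and only the 100,000-cycle endurance figure comes from the text above.

BLOCK_SIZE = 512          # bytes per block (assumed for illustration)
ENDURANCE = 100_000       # rewrite cycles before a block is treated as worn out
RESERVE_BLOCKS = 4        # spare blocks the controller keeps aside (assumed)

class ToyFlashCard:
    def __init__(self, data_blocks):
        total = data_blocks + RESERVE_BLOCKS
        self.blocks = [bytearray(BLOCK_SIZE) for _ in range(total)]
        self.rewrite_counts = [0] * total
        self.mapping = list(range(data_blocks))                  # logical -> physical block
        self.free_reserve = list(range(data_blocks, total))      # unused spare blocks

    def write(self, logical_block, offset, payload):
        """Read-modify-write: the whole block is rewritten even for a partial update."""
        phys = self.mapping[logical_block]
        block = bytearray(self.blocks[phys])                     # read the existing block
        block[offset:offset + len(payload)] = payload            # merge in the new data
        self.rewrite_counts[phys] += 1                           # each rewrite wears the block
        if self.rewrite_counts[phys] >= ENDURANCE:               # worn out: swap in a reserve block
            if not self.free_reserve:
                raise RuntimeError("no reserve blocks left; usable capacity shrinks")
            phys = self.free_reserve.pop()
            self.mapping[logical_block] = phys
        self.blocks[phys] = block                                # write the full block back

card = ToyFlashCard(data_blocks=8)
card.write(logical_block=0, offset=10, payload=b"hello")
print(bytes(card.blocks[card.mapping[0]][10:15]))                # b'hello'

The sketch deliberately leaves out the wear-leveling policy itself (spreading writes across blocks); it only shows why a small update still costs a full block rewrite and how a worn block is retired to a spare.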
In the spring of 2001 the standard was expanded by adding security features (Secure Compact Flash).
The version 2.0 specification was released on the 16th of June, 2003. The maximum allowable interface speed was increased from 8MB/s to 16MB/s (the then-available chips could only yield 5-7MB/s, so the authors of the specification tried to make some reserve of speed for the future). The support for a DMA interface with UltraDMA-33 mode was introduced for new devices. Considering the growth of card capacities, the makers of new devices were advised to provide support for FAT32 besides FAT12 and FAT16. This should have eliminated the possible problem with 2GB and larger cards.
These future-proof measures proved to be exhausted in just a year and a half. On the 6th of January, 2005, the version 3.0 specification appeared. It increased the speed to 66MB/s while maintaining compatibility with earlier released cards and added support for UltraDMA-66. The maximum storage capacity was increased to 8GB.
And finally, in March 2007 the version 4.1 specification was released to increase the speed to 133MB/s and add UltraDMA-133 mode. You can download the specification from the CFA website after passing a simple registration procedure.
Today, there are thousands of various devices using Compact Flash cards. The maximum storage capacity is 16 gigabytes and Lexar, one of the leaders in this field, has announced a card with a speed of 300x (or 45MB/s).
Now let’s get back to the devices we’ll test today. Table of contents:
Compact Flash Format
SanDisk Extreme IV
SanDisk USB Card-Reader
SanDisk FireWire Card-Reader
Sequential Read & Write Patterns
Random Read & Write Patterns
Windows Vista Ready Boost Patterns
Performance in FC-Test 1.0 | 科技 |
2016-40/3983/en_head.json.gz/10326 | View the results at Google, or enable JavaScript to view them here. You are hereHome » AtmosNews » Perspective
Texas–Oklahoma drought: What next for the Southern Plains? Bob Henson | 20 September 2011 • It finally rained in Pecos. On 14 September, the West Texas town received a modest but welcome 0.13 inches, plus another 0.27” in the following three days (about 16 millimeters total) . Normally that wouldn’t be big news—except the last time Pecos had gotten any substantial rain was on 23 September 2010. During the intervening year, the town scraped by with a mere 0.03” (1 mm). Fire devoured much of the forest in Bastrop State Park just east of Bastrop, Texas, on 9 September. (Photo © 2011 K. West.)
The beneficial rains of the last week—an inch or more across much of Texas and Oklahoma—are a blessing for locals, but they’re only a few drops in the bucket next to the record drought and heat that’s ravaged the region since last autumn.
Wichita Falls, TX, notched its 100th day of 100°F weather on 13 September. It saw its hottest month ever in August, when the average temperature (including both highs and overnight lows) hit a blistering 93.4°F (34.1°C). The city’s Hotter’N’ Hell Hundred bicycle ride—one of the nation’s most popular 100-mile circuits—lived up to its name on 27 August, as more than 10,000 pedalers endured temperatures that soared to 109°F (42.8°C).
Based on preliminary data from NOAA’s National Climatic Data Center, Oklahoma’s average temperature for July—88.9°F (31.6°C)—set the highest monthly average ever recorded for any state. Texas scored the national record for the highest summer average (June through August) with 86.8°F. Precipitation for the state of Texas from October 2010 through August 2011 averaged only 10.06” (39 mm), which is 2.34” below the previous 11-month record for the state.
The drought’s toll on people, crops, livestock, and the landscape is obvious in a stunning photo blog recently assembled by the Austin American-Statesman, and in the equally powerful comments from readers. One wrote: “I live in Omaha, Texas, and every afternoon I drive home and count more limbs dropping off trees.”
Not far from Austin, a catastrophic fire consumed more than 1,000 homes in Bastrop County earlier this month. I bicycled along quiet, tree-lined roads in Bastrop State Park on a November afternoon five years ago, amid a patch of evergreen forest similar to the Piney Woods of East Texas that was dubbed the Lost Pines. Now many of those pines are truly lost, with more than 95% of the park’s acreage burned.
No relief in year two?
The nonstop 100-degree days are gone, and the recent spritzes of moisture have raised spirits across much of Texas and Oklahoma. Yet as bad as the past year has been, things could get even worse. La Niña events hike the odds of drought across the southern United States, and a strong La Niña was in place during the last year. Now NOAA has issued a new La Niña Advisory, heralding the return of cooler-than-normal surface waters to the eastern tropical Pacific for a second winter.
“Exceptional” drought—the most dire ranking—covered most of Texas and Oklahoma for the 13 September edition of the U.S. Drought Monitor. (Image courtesy National Drought Mitigation Center.)
Not all global computer models agree that another year of La Niña conditions is in store. In fact, several say that neutral conditions will predominate, as shown in this summary from the Australian Bureau of Meteorology and this graphic from the International Research Institute for Climate and Society. The IRI is putting the odds of La Niña and neutral conditions at about 50% each. However, cool-ocean anomalies have been intensifying over the last several weeks. Since May, NOAA’s Climate Forecast System model has sent progressively stronger signals pointing toward La Niña, and other models are now reaching similar conclusions.
Can the drought get worse? “Yes,” says Mark Svoboda of the National Drought Mitigation Center, which is based at the University of Nebraska–Lincoln.
Svoboda is a frequent lead author for the U.S. Drought Monitor, an invaluable weekly summary of conditions across all 50 states. The site's lead graphic tells the tale quickly: in the most recent update (reproduced below), a brick-red blob covers most of Texas and Oklahoma. This color denotes “exceptional” drought, the most severe of five categories.
Starving cattle, storm-blocking heat
Agriculture feels the pinch most directly when a drought sets in, notes Svoboda. Rangelands are now in tatters across the region, with many ranchers selling off cattle. Even if rains were to resume in earnest, Svoboda says, “those lands might not even get back to 75% of capacity for two or three years.” Also in jeopardy is winter wheat, which typically paints the countryside bright green in February while other plants lie dormant.
Yet another threat comes from “blue northers,” cold fronts that sweep from Canada into Texas with very high wind but little rain or snow. If it doesn’t rain much this fall, such wind could pose an ever-increasing threat for wildland fires on par with the Bastrop disaster.
One reason why the Southern Plains can easily swing from dry to wet conditions is the strong east-west contrast in precipitation. Texas, in particular, is a high-contrast zone: the El Paso area averages less than 10 inches of moisture per year, while the far east typically gets more than 55 inches, as shown in the 1971–2000 averages above. (Image courtesy Texas Water Development Board.)
Landfalling hurricanes and tropical storms can sometimes pull the Southern Plains out of drought, but this year’s systems haven’t done the trick—they've struck either too far south or too far east. In part that’s because of the drought and heat itself: the associated domes of hot air beneath upper-level highs help steer moisture-laden systems away.
“These droughts can really feed on themselves,” says Svoboda. “Unless you get something like a tropical storm to bust that high pressure dome, what you’ve got is a source region for very hot, dry conditions and persistent drought.” The summer’s upper high has weakened, but he remains concerned that the dryness could work its way northeastward over the next few months: the odds for drought in much of Kansas and Nebraska more than double as a La Niña matures.
“My 'drought radar' really perks up in the Midwest when I see a La Niña winter, especially if it’s the second one in a row,” Svoboda explains.
Harbinger of a changing climate?
The unprecedented strength of both heat and drought across Texas has echoes in climate periods of the past as well as projections of the future. Researchers have long warned that the U.S. Southwest is prone to megadroughts—periods of dryness lasting 20 years or more that are far more intense than anything observed in the last century.
An overview by a team from Lamont-Doherty Earth Observatory chronicles a series of megadroughts that struck the region between 900 and 1400 A.D. Persistent cooling in the Pacific’s La Niña source region may have been a factor, they note. And megadroughts may have been a feature of southwestern climate hundreds of thousands of years ago, according to analyses of lake sediments led by Peter Fawcett (University of New Mexico) that appeared in Nature this year.
There’s also the risk that human-produced greenhouse gases will push the Southern Plains toward hotter and drier conditions on average. Warmer temperatures alone—expected to climb several degrees Fahrenheit this century, in line with global trends—will help dry the soil and intensify any droughts that do occur.
Though climate models haven’t yet pinned down future changes in regional precipitation across North America, their rough consensus is for subpolar moistening and subtropical drying—in other words, increased precipitation toward Canada and reduced precipitation toward the southern United States and Mexico.
Has global warming already played a role in the 2010–11 Texas drought? That’s the question tackled by Texas state climatologist John Nielsen-Gammon (Texas A&M University) in a 9 September post on his Climate Abyss blog. He told me:
I decided to put this together because a lot of people seemed to be pointing at the Texas drought and saying, "See? Climate change!" I wanted to look at the different ways climate change might be affecting drought and see how much of what was happening fit with ongoing trends or future projections. I was also afraid some members of the public might be thinking about the drought and heat as two independent rarities rather than two interrelated aspects of the same event.
The post includes a concise summary of climate model projections, as well as Nielsen-Gammon’s own attempt to break down the excessive heat (vividly shown in the graphic below by causal factors. In a nutshell, he accounts for most of this summer’s 5.4°F (3.0°C) anomaly in Texas temperature as follows:
4.0°F: feedback from the drought
0.9°F: long-term (human-induced) climate change
0.3°F: the Atlantic Multidecadal Oscillation (AMO)
(These three pieces sum to about 5.2°F of the 5.4°F anomaly.)
“Note that there’s uncertainty with all those numbers, and I have only made the crudest attempts at quantifying the uncertainty,” writes Nielsen-Gammon. His calculations should thus be seen as a starting point rather than the last word.
When Texas temperatures and rainfall are plotted against each other for the last 117 years, the strong ties between heat and drought become evident—as well as the unprecedented nature of 2011 (red dot at upper left). The curves shown are best-fit second-order polynomial (black) and best-fit logarithm (red). (Graphic courtesy John Nielsen-Gammon, Climate Abyss.)
As for the causes of the drought itself, Nielsen-Gammon believes the La Niña influence was enhanced by a positive AMO, which is strongly correlated with multiyear drought across the Southern Plains (see PDF on this topic) and a negative Pacific Decadal Oscillation, which fosters La Niña-like atmospheric patterns. The AMO/PDO juxtaposition could continue for years, Nieson-Gammon notes. He believes the feedback between dry soil and hot temperature took an increasingly larger role this summer.
Given the uncertainty among climate models about future precipitation, and given the fact that Texas has become slightly wetter on average over the last century, Nielson-Gammon isn’t pinning the remarkable lack of precipitation over the last year on climate change. However, he says that the drought’s overall severity was made worse by the contribution of human-induced climate change to high temperatures and enhanced evaporation.
A lively discussion has already ensued on Climate Abyss, and peer-reviewed work will no doubt shed more light on the attribution question. Only a few studies to date have built robust arguments on what percentage of a given event can be attributed to climate change. Whatever the mix of causes, the extreme nature of the 2011–12 drought and heat reminds us that in the Southern Plains, as elsewhere, past climate performance does not guarantee future results. *Media & nonprofit use of images: Except where otherwise indicated, media and nonprofit use permitted with credit as indicated above and compliance with UCAR's terms of use. Find more images in the NCAR|UCAR Multimedia & Image Gallery.The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. For Journalists
National Center for Atmospheric Research | University Corporation for Atmospheric Research @UCAR | http://www2.ucar.edu/atmosnews/perspective/5396/texas-oklahoma-drought-what-next-southern-plains Follow Us | 科技 |
2016-40/3983/en_head.json.gz/10330 | Media and Technology ForumMedia and Technology Forum Annual Meeting
FacultyCase Studies
Research SeminarsSeminar Topics
CoursesCourse Map
External Electives
StudentsStudent Clubs
Careers in Media and Technology
EventsStudent EventsMedia and Entertainment Conference
Mentoring Breakfast Series
Speaker Events
West Coast Trips
The Media and Technology Program
The Media and Technology Program » News » New Research Debunks ‘Showrooming’ Myths: Shows Brick-and-Mortar Retailers How to Keep Smartphone-Wielding Shoppers Spending In-Store New Research Debunks ‘Showrooming’ Myths: Shows Brick-and-Mortar Retailers How to Keep Smartphone-Wielding Shoppers Spending In-Store
list-style-none
Columbia Business School and Aimia researchers survey more than 3000 consumers to understand how they use smartphones in store aisles
September 12, 2013 NEW YORK—With brick–and–mortar–retail stores continuing to struggle with the rise of “showrooming” consumers—those visiting a store to see a product but then purchasing it later online—groundbreaking research from Columbia Business School and global loyalty experts Aimia shows retailers concrete steps they can take to entice consumers armed with mobile devices to make purchases inside their store walls. The report, Showrooming and the Rise of the Mobile-Assisted Shopper identifies five distinct segments of mobile–assisted shoppers and uncovers clear opportunities for retailers to engage and retain these tech–savvy customers.“Retailers know that they are operating in a new world, where the shopper in your store with a smartphone has access to every competing outlet and offer,” said David Rogers, a co–author of the study and professor at Columbia Business School. “But retailers are not powerless. To survive, it is critically important that retailers understand the real impact of smartphones on shopper behavior, which will allow them to shape a retail experience that gives mobile consumers a reason to buy in a brick–and–mortar store.”“Retailers don’t have to resort to automatic price–matching,” states Rick Ferguson, a co–author of the study and the vice president of knowledge development at Aimia. “M–Shoppers show a strong willingness to join loyalty programs in exchange for rewards, and this gives retailers the chance to build long–term relationships with them.”Some of the key takeaways of the report include:Showrooming isn’t just for the Millennial Generation: Contrary to popular belief, 74 of M–Shoppers are older than 29 years old.Mobile devices can actually improve the chances of an in–store purchase: More than 50 of M–Shoppers are more likely to purchase a product in–store when their mobile device helps them find online reviews, information, or trusted advice.Price isn’t always the most important factor: Although “price checking” is the number one action of M–Shoppers, convenience, urgency, and immediacy are the top three reasons why M–Shoppers will buy in–store even if they find the same product cheaper online.Loyalty programs are worth more than just their points: 48 of M–Shoppers say that being a member of a store’s loyalty program makes them more likely to purchase products in–store, despite equal or cheaper prices online.The researchers looked at the attitudes, shopping patterns, and motivations of 3000 leading–edge consumers in the US, UK, and Canada to better understand how mobile devices are impacting their in–store shopping habits; identifying those shoppers most likely to showroom; and outlining actions retailers can take—such as loyalty programs, price matching, free shipping, and mobile payments—to encourage consumers to open their wallets in–store. The results paint a clear picture of today’s mobile assisted shoppers—or M–Shopper—and debunks commonly held assumptions many brick–and–mortar retailers make about retail showroomers. Luring Back the Five Segments of Mobile–Assisted ShoppersThe research found that there are five distinct types of mobile–assisted shoppers and uncovered clear opportunities for retailers to engage and retain the business of these tech–savvy customers.The Exploiters: It would be easy for retailers to write off the Exploiters as a lost cause. But the best opportunity for retailers to win their business may simply be to improve the store’s website. 
When Exploiters see a product on the shelf and pull out their mobile device, they are nearly as likely to search for it on the store’s own website as on a competitor’s site (69 vs. 77).The Savvys: Although they currently represent only 13 of mobile–assisted shoppers, Savvys are the ripest target for retailers to try out new offers and experiences in the mobile space. They are simultaneously more digitally–savvy, more willing to sign up for loyalty programs, and more likely to be motivated by a range of retailer offers and rewards.The Price–Sensitives: Price–Sensitives use their devices in stores periodically, but not as consistently as the other segments. Often, the right in–store experience will be enough to earn the Price–Sensitives’ business. Their mobile devices may be with them, but still remain in their pockets and purses.The Traditionalists: These shoppers are committed to purchasing in–store, making them the least threatening segment of mobile–assisted shoppers for retailers. They are open to interacting with retail stores on their mobile devices, whether by website, store app, or even scanning a QR code. But, they are currently using their devices mostly to consult on purchases with friends and family.The Experience–Seekers: As the largest of all the segments, Experience–Seekers point to the opportunity for retailers to engage customers on their mobile devices in non–financial ways, with opportunities to comment, provide ratings, etc. And they demonstrate why retailers still need to invest in providing a unique and compelling in-store experience.“Our findings debunk many of the common assumptions about the threat of showrooming and who is doing it,” said Matthew Quint, a co–author of the study and director of Columbia Business School’s Center on Global Brand Leadership. “Many shoppers with smartphones care about more than just the lowest price on every item. In fact, while roughly 25 of M–Shoppers may require a discount to motivate in–store purchases, a clear majority can be enticed to purchase in–store through information assistance, engagement strategies, and strong loyalty rewards programs.”### About Columbia Business SchoolLed by Dean Glenn Hubbard, the Russell L. Carson Professor of Finance and Economics, Columbia Business School is at the forefront of management education for a rapidly changing world. The school’s cutting–edge curriculum bridges academic theory and practice, equipping students with an entrepreneurial mindset to recognize and capture opportunity in a competitive business environment. Beyond academic rigor and teaching excellence, the school offers programs that are designed to give students practical experience making decisions in real–world environments. The school offers MBA and Executive MBA (EMBA) degrees, as well as non–degree Executive Education programs. For more information, visit www.gsb.columbia.edu.About AimiaAimia Inc. (“Aimia” or the “Corporation”) is a global leader in loyalty management. Employing more than 4,000 people in over 20 countries worldwide, Aimia offers clients, partners and members proven expertise in launching and managing coalition loyalty programs, delivering proprietary loyalty services, creating value through loyalty analytics and driving innovation in the emerging digital, mobile and social communications spaces.Aimia owns and operates Aeroplan, Canada’s premier coalition loyalty program, Nectar, the United Kingdom’s largest coalition loyalty program and Nectar Italia. 
In addition, Aimia owns stakes in Air Miles Middle East, Mexico’s leading coalition loyalty program Club Premier, Brazil’s Prismah Fidelidade, and i2c, a joint venture with Sainsbury’s offering insight and data analytics services in the UK to retailers and suppliers. Aimia also holds a minority position in Cardlytics, a US–based private company operating in transaction–driven marketing for electronic banking. Aimia is listed on the Toronto Stock Exchange (TSX: AIM). For more information, visit us at www.aimia.com. TopicsMarketing Media and Technology Related Content
four-block
Neil Gandal of Tel Aviv University will present media research.
9:00am at Google. RSVP Required.
Media and Technology Forum Lunch
Columbia Law Professor Tim Wu will discuss his new book, The Attention Merchants: The Epic Struggle to Get Inside Our Heads
12:00pm. (Invitation only)
NYC Media Seminar Series: Linking economists working on media topics in the greater New York area by providing a regular forum for discussion.
Find the next seminar → four-block
Read Jonathan Knee's latest Book Entry on NYT DealB%k:
‘Feminist Fight Club’ Takes On Workplace Sexism
The last 20 years has seen a reduction in many of the most blatant forms of workplace discrimination against women.... Make a Gift | 科技 |
2016-40/3983/en_head.json.gz/10362 | Report: Oceans’ demise near irreversible – Moving toward the tipping point
BY LES BLUMENTHAL, McClatchy Newspapers
WASHINGTON — A sobering new report warns that oceans face a “fundamental and irreversible ecological transformation” not seen in millions of years as greenhouse gases and climate change already have affected temperature, acidity, sea and oxygen levels, the food chain and possibly major currents that could alter global weather.
The report, in Science magazine, doesn’t break a lot of new ground, but it brings together dozens of studies that collectively paint a dismal picture of deteriorating ocean health.
“This is further evidence we are well on our way to the next great extinction event,” said Ove Hoegh-Guldberg, the director of the Global Change Institute at the University of Queensland in Australia and a co-author of the report.
John Bruno, an associate professor of marine sciences at the University of North Carolina at Chapel Hill and the report’s other co-author, isn’t quite as alarmist, but he’s equally concerned.
“We are becoming increasingly certain that the world’s marine ecosystems are reaching tipping points,” Bruno said, adding, “We really have no power or model to foresee” the effect.
The oceans, which cover 71 percent of the Earth’s surface, have played a dominant role in regulating the planet’s climate. However, even as the understanding of what’s happening to terrestrial ecosystems as a result of climate change has grown, studies of marine ecosystems have lagged, the report says. The oceans are acting as a heat sink for rising temperatures and have absorbed about one-third of the carbon dioxide produced by human activities.
Among other things, the report notes:
* The average temperature of the upper level of the oceans has increased more than 1 degree Fahrenheit over the past 100 years, and global ocean surface temperatures in January were the second-warmest ever recorded for that month.
* Though the increase in acidity is slight, it represents a “major departure” from the geochemical conditions that have existed in the oceans for hundreds of thousands if not millions of years.
* Nutrient-poor “ocean deserts” in the Pacific and Atlantic oceans grew by 15 percent, or roughly 2.5 million square miles, from 1998 to 2006.
* Oxygen concentrations have been dropping off the Northwest U.S. coast and the coast of southern Africa, where dead zones are appearing regularly. There is paleontological evidence that declining oxygen levels in the oceans played a major role in at least four or five mass extinctions.
* Since the early 1980s, the production of phytoplankton, a crucial creature at the lower end of the food chain, has declined 6 percent, with 70 percent of the decline found in the northern parts of the oceans. Scientists also have found that phytoplankton are becoming smaller.
Volcanic activity and large meteorite strikes in the past have “resulted in hostile conditions that have increased extinction rates and driven ecosystem collapse,” the report says. “There is now overwhelming evidence human activities are driving rapid changes on a scale similar to these past events.
“Many of these changes are already occurring within the world’s oceans with serious consequences likely over the coming years.”
One of the consequences could be a disruption of major ocean currents, particularly those flowing north and south, circulating warm water from the equator to polar regions and cold water from the poles back to the equator. Higher temperatures in polar regions and a decrease in the salinity of surface water because of melting ice sheets could interrupt such circulation, the report says.
The change in currents could further affect such climate phenomena as the El Niño-Southern Oscillation, the Pacific Decadal Oscillation and the North Atlantic Oscillation. Scientists just now are starting to understand how these phenomena affect global weather patterns.
“Although our comprehension of how this variability will change over the coming decades remains uncertain, the steady increase in heat content in the ocean and atmosphere are likely to have profound influences on the strength, direction and behavior of the world’s major current systems,” the report says.
Kelp forests such as those off the Northwest U.S. coast, along with corals, sea grasses, mangroves and salt marsh grasses, are threatened by the changes the oceans are undergoing, the report says. All of them provide habitat for thousands of species.
The polar bear isn’t the only polar mammal that faces an escalating risk of extinction, the report says; penguin and seal populations also are declining.
“It’s a lot worse than the public thinks,” said Nate Mantua, an associate research professor at the University of Washington’s Climate Impacts Group.
Mantua, who’s read the report, said it was clear what was causing the oceans’ problems: greenhouse gases. “It is not a mystery,” he said.
There’s growing concern about low-oxygen or no-oxygen zones appearing more and more regularly off the Northwest coast, Mantua said. Scientists are studying the California Current along the West Coast to determine whether it could be affected, he added.
~ by Eric Harrington on July 17, 2010.
Posted in Current Events, Environment, Previously Published. Tags: Oceans are dying, oceans reach tipping point
WTC collapse 421 Subscribers July 2010 | 科技 |
2016-40/3983/en_head.json.gz/10368 | Spot update on Fukushima nuclear crisis and radiation contamination crisis (Nov 27, 2011)
November 28, 2011 in 1 Nov 28 breaking news update: Fukushima Daiichi reactor no. 2 did not explode, cause of release of radiation still not known (NHK)…
Academic society set up to study decontamination (NHK, Nov 28)
A group of researchers has set up an academic society in the hope of helping on-going efforts to remove radioactive materials caused by the trouble at the Fukushima Daiichi nuclear plant.
Researchers in a wide range of fields, including atomic energy and nuclear waste, jointly launched the society at a meeting in Tokyo on Monday.
Ehime University visiting professor Masatoshi Morita, an expert on environmental pollution, said progress has been slow in decontamination efforts centering on Fukushima Prefecture.
He emphasized the need for the cooperation of various types of specialists to study technologies that would be effective in cleaning up radioactive contamination.
A Japan Atomic Energy Agency official in charge of decontamination in Fukushima noted that radioactive contamination levels on houses near forests are difficult to reduce because of the radioactivity that adheres to trees.
The society hopes to come up with recommendations for municipal authorities making decontamination efforts.
Cesium from Fukushima plant fell all over Japan (Asahi, November 27, 2011)
Radioactive substances from the crippled Fukushima No. 1 nuclear power plant have now been confirmed in all prefectures, including Uruma, Okinawa Prefecture, about 1,700 kilometers from the plant, according to the science ministry.
The ministry said it concluded the radioactive substances came from the stricken nuclear plant because, in all cases, they contained cesium-134, which has a short half-life of two years.
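A rough way to see why the cesium-134 signature matters (my own illustration, not part of the ministry's statement): Cs-134 has a half-life of roughly 2 years and Cs-137 of roughly 30 years, so older contamination keeps its Cs-137 but loses its Cs-134, and finding the two together points to a recent release. A minimal sketch of that decay arithmetic:

def remaining_fraction(years, half_life):
    """Fraction of an isotope left after the given time, from the half-life rule."""
    return 0.5 ** (years / half_life)

for elapsed in (2, 10, 25):
    cs134 = remaining_fraction(elapsed, half_life=2.0)    # approximate Cs-134 half-life
    cs137 = remaining_fraction(elapsed, half_life=30.0)   # approximate Cs-137 half-life
    print(f"after {elapsed:>2} years: Cs-134 {cs134:.1%} left, Cs-137 {cs137:.1%} left")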
Before the March 11 Great East Japan Earthquake, radioactive substances were barely detectable in most areas.
But the Ministry of Education, Culture, Sports, Science and Technology’s survey results released on Nov. 25 showed that fallout from the Fukushima plant has spread across Japan. The survey covered the cumulative densities of radioactive substances in dust that fell into receptacles during the four months from March through June.
Figures were not available for Miyagi and Fukushima prefectures, where the measurement equipment was rendered inoperable by the March 11 disaster.
The ministry also said Nov. 25 that it will conduct aerial measurements of cesium accumulations in soil in regions outside the 22 prefectures starting next year.
The highest combined cumulative density of radioactive cesium-134 and cesium-137 was found in Hitachinaka, Ibaraki Prefecture, at 40,801 becquerels per square meter. That was followed by 22,570 becquerels per square meter in Yamagata, the capital of Yamagata Prefecture, and 17,354 becquerels per square meter in Tokyo’s Shinjuku Ward….”
Read more at ajw.asahi.com
Towns avoid govt help on decontamination (Yomiuri, Nov. 28, 2011)
MAEBASHI–Municipalities contaminated with radiation as a result of the crisis at the Fukushima No. 1 nuclear power plant are concerned that the central government’s plan to designate municipalities for which it will shoulder the cost of decontamination will stigmatize those communities, according to a Yomiuri Shimbun survey.
As early as mid-December, the government plans to begin designating municipalities that will be subject to intensive investigation of their contamination, which is a precondition for the government paying for decontamination in place of the municipalities.
Municipalities with areas found to have a certain level of radiation will be so designated. The aim of the plan is to promote the thorough cleanup of contaminated cities, towns and villages, including those outside Fukushima Prefecture.
However, many local governments are reluctant to seek such designation, fearing it may give the false impression that the entire municipality is contaminated.
Based on an aerial study of radiation conducted by the Education, Culture, Sports, Science and Technology Ministry in mid-September, municipalities in Tokyo and Miyagi, Fukushima, Ibaraki, Tochigi, Gunma, Saitama and Chiba prefectures were candidates for the government designation.
The aerial study examined radiation in the atmosphere one meter above the ground. Municipalities with areas where the study detected at least 0.23 microsieverts of radiation were listed as candidates. About 11,600 square kilometers of land, equivalent to the size of Akita Prefecture, reached that level, the ministry said.
The Yomiuri Shimbun has asked municipalities in the prefectures–excluding Fukushima Prefecture–whether they would seek the government designation as municipalities subject to intensive investigation of radiation contamination. Fifty-eight of the cities, towns and villages that responded to the survey said they would seek the designation.
Almost all the municipalities in Gunma and Ibaraki prefectures had areas where radiation in excess of the government standard was detected. However, only 10 municipalities in Gunma Prefecture and 19 in Ibaraki Prefecture said they would seek the designation.
The figures represent only about 30 percent of the municipalities in Gunma Prefecture and about 40 percent of those in Ibaraki Prefecture.
The Maebashi municipal government said it would not request the designation.
In late August, radioactive cesium exceeding the government’s provisional regulatory limit was detected in smelt caught at Lake Onuma, located on the summit of Mt. Akagi in northern Maebashi. The opening of the lake’s fishing season for smelt has been postponed.
Usually, the lake would be crowded with anglers at this time of year, but few people are visiting this season.
However, in most of Maebashi, excluding mountainous regions, the radiation detected in the September study was below the regulatory limit.
“If the government designates our city [as subject to intensive investigation of radiation contamination], the entire city will be seen as contaminated. We decided to avoid such a risk,” a senior municipal government official said.
The Maebashi government wants to prevent the city’s tourism and agriculture from being damaged further, the official added.
Daigomachi in Ibaraki Prefecture, a city adjacent to Fukushima Prefecture, said the city has also refrained from filing for the designation. Usually about 700,000 people visit Fukuroda Falls, the city’s main tourist destination, every year, but the number has dropped to half since the nuclear crisis began, the town said.
“If our town receives the designation, it may deliver a further blow to our image, already damaged by radiation fears,” an official of the town’s general affairs department said.
In recent months, citizens in the Tokatsu region of northwest Chiba Prefecture have held protests demanding local governments immediately deal with areas where relatively high levels of radiation were detected. All six cities in the region, including Kashiwa, said they would file requests for the government designation. The Kashiwa municipal government said it had already spent about 180 million yen on decontamination.
“People are loudly calling for decontamination. We hope that the designation will eventually lower the cost of decontamination,” an official of the municipal government’s office for measures against radiation said.
Observers have said one of the reasons the six cities decided to request the designation was their low dependence on agriculture and other primary industries that are vulnerable to fears of radiation.
Kobe University Prof. Tomoya Yamauchi, an expert on radiation metrology, said: “It will be a problem if decontamination activities stall due to local governments’ fears of stigmatization. To prevent misunderstanding of radiation, the government needs to do more to disseminate correct information.”
“The Hiroshima Syndrome” is a website that has interesting posts about growing radiophobia in Japan and traces some rumours and shows how some rumours are generated, click here to read about the issue.
RELIVING THE HORRORS OF HIROSHIMA AND NAGASAKI IN 2011 by Lester R. Brown (NPQ, Nov 22)
“Because of soil contamination, one-eighth of Fukushima’s soil can never be plowed again, and the consumption of crops grown on such plots is strictly forbidden. Many local companies have gone bankrupt, while 20,000 individual proprietors are on the brink of insolvency. The Tokyo Electric Power Company recently laid off 7,400 employees due to the cash settlements it will pay to the victims of the nuclear accident. Though the company is still afloat, it’s expected to soon go under due to its enormous capital investment in nuclear power, which now faces an uncertain fate in Japan and elsewhere.
At the risk of being melodramatic, the ripple effects of Fukushima go well beyond northern Japan. …Read more here.
Letters from tsunami-affected students tell of bullying, lingering stress (Mainichi, Nov 28)
Children forced by the Great East Japan Earthquake and accompanying disasters to change schools have written into a government-run counseling service telling of bullying and lingering stress from the disasters.
The service was started in 2006 by the Ministry of Justice (MOJ)’s Human Rights Bureau at elementary and junior-high schools around the country. A student can write on a prepared letter about problems that they can’t talk to friends or family about, post it in a mailbox, and the letter is delivered to the nearest legal affairs bureau. Government employees or volunteers write back and work with schools or youth counseling services as necessary.
According to the Human Rights Bureau, there were over 1,100 letters sent in this year between April and September, with around 20 being related to the Great East Japan Earthquake.
One student from the northeast Tohoku region wrote, “I was at school when the tsunami hit. Now I attend a different school, but I feel that I’m being ostracized.” Another student wrote, “I’ve been bullied at the school I transferred to. I can’t talk to any teachers about it. A bully even said to me, ‘Too bad you didn’t die in the tsunami.'”
“I constantly wonder why my family had to die and can’t focus on studying,” wrote another student. “My father who was living in Tohoku died in the tsunami, and I can’t accept it as reality,” wrote another.
One student in the Kanto region — which includes Tokyo — wrote, “I can’t drink the water because I’m worried about radiation.”
Through October and November, the Human Rights Bureau has been distributing the counseling letters to all elementary and junior-high schools across the nation. Kiyoko Yokata of the MOJ’s Human Rights Bureau says, “We are asking workers and volunteers to think carefully of the letter writers’ feelings when writing their responses. To protect the human rights of children, we will respond to the letters with care.”
Tsunami-hit city in Iwate delivers free schoolbags to new elementary students (Mainichi Japan) November 28, 2011
RIKUZENTAKATA, Iwate — This tsunami-hit city in Iwate Prefecture delivered on Nov. 27 schoolbags to children preparing to enter elementary school next spring for free thanks to the generosity of people from around the country.
The city’s education board handed out about 90 school bags to kindergartners out of about 150 potential first graders.
About 600 school bags were delivered to the former Yahagi Elementary School, and many children and their parents stood in line to receive the bags. Kota Maeda, 5, broke into a broad smile as he picked up and carried schoolbag of his favorite color.
Mana Takahashi, 6, who lives in a temporary house in the city’s Kesencho district, chose a pink school bag. “I want to study hard at school,” she said.
More cesium in Fukushima rice (Yomiuri, Nov.27)
FUKUSHIMA–The Fukushima prefectural government has announced that radioactive cesium beyond the provisional regulatory limit was detected in unmilled rice harvested at five farms in the Onami district of Fukushima Prefecture.
Radioactive cesium exceeding the limit of 500 becquerels per kilogram was recently detected in harvested rice at another farm in the area, fueling concerns among consumers.
This time as much as 1,270 becquerels of radioactive cesium per kilogram was detected in unmilled rice, the prefecture said Friday. The rice has not been shipped to the market. Instead, it was stored in farmers’ warehouses or a local agricultural cooperative, or was distributed to farmers’ relatives.
The prefectural government is currently analyzing all the rice grown by the 154 rice farms in the district, or 4,752 bags containing 30 kilograms of rice each. It has finished inspecting 864 rice bags from 34 farms so far.
Apart from the first farm where rice was found to have been contaminated, excess radioactive cesium has been detected in 103 rice bags from five farms.
Excess cesium was detected in all 24 rice bags from the farm that produced rice in which radioactive cesium at 1,270 becquerels per kilogram was found. The minimum level of contamination at that farm was 970 becquerels per kilogram.
Radioactive cesium between 540 and 1,110 becquerels per kilogram was detected from unmilled rice from another farm, according to the prefectural government.
The five farms are located from one to 2.5 kilometers from the first farm in question. They have nothing in common with the first farm topographically, such as using the same freshwater from a mountain in their rice paddies.
In addition to the Onami district, the prefectural government is inspecting rice harvested in Date, which includes some hot spots recommended for evacuation, and in three other cities–Fukushima, Soma and Iwaki–which include areas with relatively high levels of radiation.
The local government plans to compile all results by mid-December.
Local govts struggling (Yomiuri, Nov.28)
Tsunami probability raised to 30% (Japan Times, Nov 27)
Much earlier: Remembering Chernobyl (Daily Bruin)
Thinking back to 20 years ago, it’s the splashing in yellow rainwater that Antonina Sergieff vividly recalls.
The third-year graduate student didn’t know it then, but the unnatural color of those puddles in her hometown of Gomel, Belarus were due to radioactive particles spewing from a nuclear explosion 80 miles away.
Surrounded by ancient pine forests, the Chernobyl nuclear power station exploded during the early morning hours of April 26, 1986, setting off a raging radioactive fire that expelled over 190 tons of toxic material into the atmosphere.
Today, on the 20th anniversary of the incident, the Russian languages and literatures student can look back to the explosion and accept a childhood surrounded by radioactive contamination.
“We all jumped in the puddles with the yellow stuff. … You don’t see (it in) the air, it doesn’t materialize. But when you see the yellow dust, you see radiation,” Sergieff said.
The accident was originally caused by a small testing error that resulted in a chain reaction in which highly pressurized steam literally blew the top off of a nuclear reactor.
The result was the release of 100 more times radiation than the atomic bombs dropped on Hiroshima and Nagasaki, according to the United Nations issue brief on Chernobyl.
Among the unstable elements released were iodine-131, caesium-137, strontium-90 and plutonium-239. Scientists say that exposure to such elements, especially in such high doses, impairs critical cellular functions and damages DNA.
When these elements first reached Sergieff 20 years ago, they came in the form of yellow rain.
It was not long after that residents in her hometown knew it wasn’t simply “pollen” – which is what government officials assured them, she said.
Soon, people started losing their hair, pictures of deformed animals sprouted up in independent newspapers, and incidences of cancer in Belarus skyrocketed, Sergieff said.
According to the U.N. brief, cases of breast cancer in Belarus doubled between 1988 and 1999, among other increases.
2016-40/3983/en_head.json.gz/10399 | Global Water Sustainability Flows Through Natural and Human Challenges
Water's fate in China mirrors problems across the world: fouled, pushed far from its natural origins, squandered and exploited.
In this week's Science magazine, Jianguo "Jack" Liu, director of Michigan State University's Center for Systems Integration and Sustainability, and doctoral student Wu Yang look at lessons learned in China and management strategies that hold solutions for China -- and across the world. In their article "Water Sustainability for China and Beyond," Liu and Yang outline China's water crisis and recent leapfrog investment in water conservancy, and suggest addressing complex human-nature interactions for long-term water supply and quality. China's crisis is daunting, though not unique: Two-thirds of China's 669 cities have water shortages, more than 40 percent of its rivers are severely polluted, 80 percent of its lakes suffer from eutrophication -- an over abundance of nutrients -- and about 300 million rural residents lack access to safe drinking water. Water can unleash fury. Floods in Beijing on July 21 overwhelmed drainage systems, resulting in scores of deaths. Water shortages also may have contributed to recent massive power outages in India as rural farmers stressed a fragile grid by pumping water for irrigation during drought. China has dedicated enormous resources -- some $635 billion worth -- which represents a quadrupling of investment in the next decade, mainly for engineering measures. There needs to be, Liu and Yang say, a big picture view of water beyond engineering measures. "There is an inescapable complexity with water," Liu said. "When you generate energy, you need water; when you produce food, you need water. However, to provide more water, more energy and more land are needed, thus creating more challenges for energy and food production, which in turn use more water and pollute more water. "In the end, goals are often contradictory to each other. Everybody wants something, but doesn't take a systems approach that is essential for us to see the whole picture." Liu, who holds the Rachel Carson Chair in Sustainability, is a pioneer in using a holistic approach to address complex human-environmental challenges. Solutions, he says, come from looking at issues from multiple points of view at the same time. That's the way to avoid the unintended consequences that plague China's water, and a way to prevent a water crisis from becoming a water catastrophe. In their article, they give suggestions to make the grand investment more effective and efficient, strategies that apply not only to China, but resonate on a global level. For example: •Shore up laws and policies with cross-organizational coordination to clarify who is in charge, and who has enforcement powers.•Get proactive., Evaluate initiatives, set performance criteria and engage the public in planning•Use social science to strengthen long-term plans by predicting people's behaviors and taking values into account.•Remember the world has become much smaller, and global connections such as trade and sharing of international rivers have great impacts on water sustainability and quality. At the core, they say, is the understanding of complex human-nature interactions. Such understanding is key to achieve water sustainability in China and the rest of the world. | 科技 |
2016-40/3983/en_head.json.gz/10400 | Making mobile work; success strategies revealed
By Alice LipowiczJul 29, 2011
The Smithsonian Institution has had to think creatively about how to offer access to its scientific, historic and cultural information on mobile applications, an official said.
“The depth of our content is very challenging to present through the small screen,” Nancy Proctor, director of mobile strategy and initiatives for the Smithsonian, said July 28 in a Web presentation sponsored by the General Services Administration.
With 4 billion cell phones in operation in the world, including more than a billion smart phones, the Smithsonian can't ignore those audiences, she said.
Related stories: Smithsonian CTO offers tips to federal knowledge workers
Smithsonian boosts online sharing with Web 2.0
About 40 percent of the Smithsonian’s visitors have a smart phone, Proctor said. While viewing the collections, those visitors can access applications on smart phones that supplement, and in some cases replace, other tools such as museum audio tours. “Mobile learning is where we will be focusing attention,” she said. Mobile also is a disruptive technology and can transform society because of its dual nature as a personal and social tool, Proctor added.
She gave as an example a protester at the National Portrait Gallery in 2010 who displayed a personal iPad playing an art video by David Wojnarowicz that had been banned from public exhibit in that gallery.
Although that exhibit and the protest were controversial, there is potential to take advantage of the disruptive nature of mobile in a positive way. “We do not want to just repackage and rebroadcast our messages,” Proctor said. “We want to think about what is special about mobile.”
Sometimes the “special” nature of an application is not obvious until people start using it, she added. For example, with the Smithsonian’s Leafsnap mobile application available on iPhone and iPad, users can check a database of photographs of tree leaves to identify the tree species. It was co-developed with Columbia University and the University of Maryland. Although it was initially intended as a research project for scientists, it has been popular with the public as well, Proctor said.
“When you go for goal A, sometimes you get goal B,” Proctor said. “Leafsnap is not just about adding to the database for research or a bar code scanner. It is really getting you to look at trees, at the bark and the berries. It fulfills a learning experience."
“Mobile helps the Smithsonian Institution overall in its strategy and principles, including providing broader access, educating, connecting communities and strengthening collections,” Proctor said.
Additional goals for the mobile strategy include equipping Smithsonian staffers with leading-edge tools to stay in the forefront of their fields, updating users’ experience of the Smithsonian and developing metrics for accessibility, quality, relevance, sustainability and accountability of the Smithsonian.
The institution has 14 mobile applications, including seven that are accessed on mobile websites. They include the “GoSmithsonian” information guide to visiting the museums, “Chandra Mobile” news from the Chandra X-Ray Observatory and the “MeAnderthal” application to mash up users’ photos with images of cavemen and -women.
One of the more difficult parts of developing a mobile strategy was to balance a need for centralized standards and policies with a need for autonomy by the individual museums, Proctor said.
On the Web, the Smithsonian museums had a lot of autonomy and their websites each have a different look and feel. In mobile, the goal is to have a more unified look to the Smithsonian offerings, Proctor said.
Some of the tools being explored are Extensible Markup Language and open-source Drupal code to ensure that content is platform-agnostic as much as possible, she added. E-Mail this page | 科技 |
2016-40/3983/en_head.json.gz/10411 | Ideological Cartography
Forecasting Ideology in the 113th Congress (2013-14)
The University of Google: Was the decision to exit China ideological or business as usual?
Adam Bonica
After years of unease about the Chinese government’s censorship policies, Google announced this month that it would be shutting down its Internet search service in Mainland China, citing the recent cyber attacks on its systems, widely believed to have been orchestrated by the Chinese government, as “the straw that broke the camel’s back.” Instead of selling off their existing Chinese operations to the highest bidder and leaving town, Google pursued the more risky strategy of rerouting searches through its Hong Kong servers, which remain free from government censors.
Speculation surrounding Google’s motives seems to have generated as much media attention as the business implications of the fallout. While watching the story develop during the past week, I came up with three general perspectives on why Google really left China:
1) Google is making a principled stand against the Chinese government’s censorship and repression of free speech at the cost of its bottom-line;
2) Google is pursuing a strategy that appears costly in the near term but will benefit its business model in the long run;
3) Google was tired of being pushed around by what it saw as an increasingly hostile foreign government.
While it may be tempting to dismiss the first account as overly naïve, I think we would be remiss to discount the possibility that Google’s ideology was central to its decision. There are two reasons that I can think of to support this claim. The first is that Google has been upfront from the beginning that its decision had a major ideological component. In an interview with the Wall Street Journal, Google co-founder Sergey Brin invoked his personal experience with totalitarianism to characterize the decision as a principled stand against Internet censorship and government surveillance.
The second reason is that, despite its status as a Fortune 500 company, Google is among a growing group of powerful corporations closely aligned with the left. Based on my estimates, Google’s employees are the fourth most liberal of any U.S. corporation, behind Genentech, Apple Inc. and Starbucks (See figure above or click here to view in table format). To put this in perspective, consider that during the 2008 election cycle, Google employees raised $20,800 for John McCain and $55,451 for Ron Paul, compared to $89,300 for Hillary Clinton and an astounding $803,436 for Barack Obama. In fact, in terms of political contributions, the employees at these firms more closely resemble faculty at liberal universities than traditional Fortune 500 corporations. In some ways, this makes sense. With the exception of Starbucks, each of the most liberal firms, in large part, deals in research and innovation, and each has a reputation of actively recruiting Ph.D.s.
This is where I would usually caution against putting too much stock into what the ideology of employees reveals about a corporation’s decision-making, because it is the board of directors that ultimately decides issues of this magnitude. However, Google’s board of directors also appears to be decidedly liberal.
In total, seven members of Google’s board have contribution records that can be used to gauge their ideology. Two of the board’s members, Eric Schmidt (-0.71; view records) and John Doerr (-0.72; view records) are major fundraisers for the Democratic Party and act as advisors to the Obama Administration. Ram Shriram (-1.78; view records) and Princeton University President Shirley Tilghman (-1.43; view records) are less active contributors but have contributed exclusively to Democrats. The only political contributions made at the federal level by Google co-founders Sergey Brin (view records) and Larry Page (view records) have been to Google’s corporate PAC. However, each has contributed heavily to California ballot initiatives. Although their ideological estimates cannot be directly compared to estimates recovered from FEC data, an ideological mapping of contributions in California places Brin and Page to the left of the average Democratic candidate for the California State Assembly. Lastly, Intel CEO Paul Otellini (0.02; view records) has given in roughly equal proportions to both parties.
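For readers unfamiliar with how contribution records get turned into numbers like those above, the toy sketch below computes a simple dollar-weighted average of recipient scores on a scale where negative values are liberal and positive values are conservative. It is only an illustration of the intuition; the estimates quoted in this post come from a more sophisticated scaling model, and the donor records and candidate scores below are invented.

```python
# Toy contribution-weighted ideology score: a dollar-weighted mean of the
# scores of the candidates a donor gave to, with negative meaning liberal and
# positive meaning conservative. This is a simplification for illustration
# only -- not the actual scaling model behind the estimates in the post --
# and all of the data below is made up.
RECIPIENT_SCORES = {          # hypothetical candidate ideal points
    "Candidate A (D)": -0.9,
    "Candidate B (D)": -0.6,
    "Candidate C (R)":  0.8,
}

donor_contributions = [       # (recipient, dollars) -- invented example data
    ("Candidate A (D)", 2300),
    ("Candidate B (D)", 1000),
    ("Candidate C (R)",  250),
]

def weighted_ideology(contribs, scores):
    total = sum(amount for _, amount in contribs)
    return sum(scores[name] * amount for name, amount in contribs) / total

print(round(weighted_ideology(donor_contributions, RECIPIENT_SCORES), 2))
# Prints -0.7: a donor giving mostly to liberal candidates lands well left of zero.
```

Even in this toy form, weighting by dollars makes clear why a contributor whose money flows overwhelmingly to one side ends up with a strongly signed score.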
Ideology aside, there is a case to be made that Google’s business model thrives on free, open and democratic societies, which is one of the arguments Google has used to justify its exit from China. It has even gone so far as to suggest to Congress that the U.S. should consider withholding development aid to countries that restrict access to certain websites. This has some merit, but it sounds suspiciously like a post-hoc rationale. While censorship is harmful to Google’s business objectives—especially the wholesale blocking of YouTube and Blogger—it is difficult to see how such laws are of a different nature than the types of regulation imposed on other industries. When faced with unfavorable regulation, as long as their operations remain profitable, corporations usually respond either by meeting the minimal requirements for compliance and lobbying for reform, or when feasible, moving their operations out of state or offshore, which is essentially what Google ended up doing.
In the end, Google’s decision to leave China was likely a mixture of ideological and business considerations, which I think makes it much more likely that Google could actually begin exerting influence over U.S. foreign policy. A corporation’s efforts to influence policy are often most potent when its ideology and profit motives align. Perhaps the best example of this during the past century was the United Fruit Co., whose distaste for leftist regimes was matched only by its profit motive. It is the United Fruit Co. and its infamous history of involvement in Latin American politics to which we owe the term banana republic. The more idealistic view is that as Google expands, it will leave a handful of liberated Google republics in its wake. This is not the first time Google has openly defied a foreign regime. Last summer, Google rushed its Farsi translation tool to market in response to the pro-democracy protests. Then again, unlike the Chinese case, Google had little to lose by angering the Iranian government. It is an admirable thought, but making enemies of governments, foreign or domestic, has not yet proven viable as a long-term business strategy—and even Google is unlikely to change that.
Tags: Apple, business and politics, campaign contributions, China, Corporations, Eric Schmidt, Fortune 500, Genentech, Google, ideology, Larry Page, political donations, Sergey Brin, Silicon Valley
2016-40/3983/en_head.json.gz/10594 | Rebecca Jeschke Media Relations Director and Digital Rights Analyst [email protected] @effraj PGP Key +1 415 436 9333 x125 Rebecca Jeschke is EFF's Media Relations Director and a Digital Rights Analyst, fielding press requests on a broad range of issues including privacy, free speech, and intellectual property matters. Her media appearances include Fox News, CNN, NPR, USA Today, New York Times, Washington Post, Associated Press, and Harper's Magazine, and she has been a presenter at South by Southwest. Before joining EFF in 2005, Rebecca worked in television and Internet news for more than ten years, including stints as an Internet producer for CBS 5 in San Francisco and as a senior supervising producer for TechTV. She has also been a travel guide editor, an English teacher in the Dominican Republic, and a worker on a "slime line" gutting fish in Alaska. Rebecca has a Bachelor of Arts in English and American Literature and Language from Harvard University.
Deeplinks Posts by Rebecca Jeschke
Press Releases by Rebecca Jeschke
February 5, 2007 EFF Battles Gambit to Freeze Telecom Surveillance Cases EFF filed suit more than a year ago against AT&T, accusing the telecom giant of collaborating with the NSA's illegal spying program. Despite Judge Vaughn Walker's decision rejecting their motions to dismiss in July, both the government and AT&T are still working to stall progress in the case.
November 9, 2006 Government Wants Stay in AT&T Case The U.S. government has asked for a stay in our case against AT&T for collaborating with the NSA in illegal spying on its customers. The government also wants to halt proceedings in the other class action cases against other telecommunication companies until the U.S.
November 8, 2006 E-Voting Problems in Tight Florida Race A detailed report in Wednesday's Sarasota Herald-Tribune raises some important questions about touchscreen voting and a congressional race there that could be missing votes.
October 2, 2006 Californians Lose Out on New RFID Safeguards Last month, California's state legislature passed a bipartisan, groundbreaking new law that would institute tough privacy safeguards for Radio Frequency Identification (RFID) chips embedded in state identification cards.
July 12, 2006 Apple Won't Appeal -- Online Journalists' Source Protection Stands EFF confirmed today through court filings that Apple Computer will not appeal a May ruling that secured the reporter's privilege for online journalists in California.
June 23, 2006 EFF Battles Government's Motion to Dismiss AT&T Surveillance Case EFF went to court today to tell a federal judge that the government should not be allowed to use the "state secrets privilege" to preempt our class-action lawsuit against AT&T.
April 20, 2006 EFF Stands Up for Online Journalists' Rights in Apple v. Does Today, EFF Staff Attorney Kurt Opsahl argued the critical issues in Apple v. Does before a San Jose, California appeals court, telling a panel of three judges that denying confidential source protection to journalists -- whether online or offline -- would deliver a dangerous blow to all media.
February 15, 2006 Time to Settle Up with Sony BMG If you were upset about Sony BMG's dangerous digital rights management (DRM) released in millions of CDs last year, now is the time to show that you care. The settlement process has begun in EFF's class action lawsuit against the entertainment giant. Music fans who bought the affected CDs can submit claims for clean CDs.
January 18, 2006 Update: Apple Makes iTunes Tweaks After Internet Uproar Last week, we told you about a troublesome "phone home" feature in iTunes MiniStore -- one of the new "improvements" in iTunes announced at MacWorld.
December 15, 2005 "Good News for Music/Lyrics Fans After All?" That's the question that pearLyrics asked today on its homepage. But the cautious optimism from developer Walter Ritter comes after a rough week.
August 21, 2013 Late Digital Rights Activist, International Access to Knowledge Advocate, and NSA Spying Journalists Win EFF Pioneer Awards San Francisco - The Electronic Frontier Foundation (EFF) is pleased to announce the distinguished winners of the 2013 Pioneer Awards: late digital rights activist Aaron Swartz, international access to knowledge advocate James Love, and Glenn Greenwald and Laura Poitras – the journalists behind the blockbuster stories detailing extensive spying by the U.S. National Security Agency (NSA).
August 9, 2013 Judge Grants Preliminary Injunction to Protect Free Speech after EFF Challenge Newark, NJ - A New Jersey federal district court judge granted motions for a preliminary injunction today, blocking the enforcement of a dangerous state law that would put online service providers at risk by, among other things, creating liability based on "indirect" publication of content by speech platforms.
July 31, 2013 Huge Global Coalition Stands Against Unchecked Surveillance San Francisco - More than 100 organizations from across the globe – including Privacy International, Access, and the Electronic Frontier Foundation (EFF) – are taking a stand against unchecked communications surveillance, calling for the governments around the world to follow international human rights law and curtail pervasive spying.
July 16, 2013 Unitarian Church, Gun Groups Join EFF to Sue NSA Over Illegal Surveillance San Francisco - Nineteen organizations including Unitarian church groups, gun ownership advocates, and a broad coalition of membership and political advocacy organizations filed suit against the National Security Agency (NSA) today for violating their First Amendment right of association by illegally collecting their call records.
July 8, 2013 Federal Judge Allows EFF's NSA Mass Spying Case to Proceed San Francisco - A federal judge today rejected the U.S. government's latest attempt to dismiss the Electronic Frontier Foundation's (EFF's) long-running challenge to the government's illegal dragnet surveillance programs. Today's ruling means the allegations at the heart of the Jewel case move forward under the supervision of a public federal court.
June 27, 2013 Internet Archive Sues to Stop Dangerous New Jersey Law Putting Online Service Providers at Risk Newark, NJ - The Internet Archive has filed a new legal challenge against a New Jersey state law that aims to make online service providers criminally liable for providing access to third parties' materials, conflicting directly with federal law and threatening the free flow of information on the Internet. A hearing on the Internet Archive's request for a preliminary injunction against the law is set for 10am Friday at the federal courthouse in Newark.
June 27, 2013 Renowned Security Expert Bruce Schneier Joins EFF Board of Directors San Francisco - The Electronic Frontier Foundation (EFF) is honored to announce the newest member of its Board of Directors: renowned security expert Bruce Schneier.
June 26, 2013 EFF Sues FBI For Access to Facial-Recognition Records San Francisco - As the FBI is rushing to build a "bigger, faster and better" biometrics database, it's also dragging its feet in releasing information related to the program's impact on the American public. In response, the Electronic Frontier Foundation (EFF) today filed a lawsuit to compel the FBI to produce records to satisfy three outstanding Freedom of Information Act requests that EFF submitted one year ago to shine light on the program and its face-recognition components.
June 26, 2013 EFF Throttles Notorious Patent Used to Threaten Public Transit Systems San Francisco - The Electronic Frontier Foundation (EFF) has throttled a notorious patent used to wrongfully demand payment from cities and other municipalities that use tracking systems to tell transit passengers if their buses and trains are on time.
June 4, 2013 EFF Urges Appeals Court to Affirm Libraries' Right to Digitize Books San Francisco - The Electronic Frontier Foundation (EFF) urged an appeals court today to affirm that the fair use doctrine protects the creation of an invaluable digital library.
2016-40/3983/en_head.json.gz/10597 | Lessons from oil industry may help address groundwater crisis
CORVALLIS, Ore. - Although declining streamflows and half-full reservoirs have gotten most of the attention in water conflicts around the United States, some of the worst battles of the next century may be over groundwater, experts say - a critical resource often taken for granted until it begins to run out.
Aquifers are being depleted much faster than they are being replenished in many places, wells are drying up, massive lawsuits are already erupting and the problems have barely begun. Aquifers that took thousands of years to fill are being drained in decades, placing both agricultural and urban uses in peril. Groundwater that supplies drinking water for half the world's population is now in jeopardy.
A new analysis by researchers at Oregon State University outlines the scope of this problem, but also points out that some tools may be available to help address it, in part by borrowing heavily from lessons learned the hard way by the oil industry.
"It's been said that groundwater is the oil of this century," said Todd Jarvis, associate director of the Institute for Water and Watersheds at OSU. "Part of the issue is it's running out, meaning we're now facing 'peak water' just the way the U.S. encountered 'peak oil' production in the 1970s. But there are also some techniques developed by the oil industry to help manage this crisis, and we could learn a lot from them."
Jarvis just presented an outline of some of these concepts, called "unitization," at a professional conference in Kyoto, Japan, and will also explore them at upcoming conferences in Stevenson, Wash., and Xi'an, China. Other aspects of the issue have been analyzed in a new documentary film on the special problems facing the Umatilla Basin of eastern Oregon, a classic case of declining groundwater problems. (DVD copies of the documentary are available free upon request, by calling 541-737-4032.)
The problems are anything but simple, Jarvis said, and are just now starting to get the attention needed.
"In the northern half of Oregon from Pendleton to the Willamette Valley, an aquifer that took 20,000 years to fill is going down fast," Jarvis said. "Some places near Hermiston have seen water levels drop as much as 500 feet in the past 50-60 years, one of the largest and fastest declines in the world.
"I know of a well in Utah that lost its original capacity after a couple years," he said. "In Idaho people drawing groundwater are being ordered to work with other holders of stream water rights as the streams begin to dwindle. Mississippi has filed a $1-billion lawsuit against the City of Memphis because of declining groundwater. You're seeing land subsiding from Houston to the Imperial Valley of California. This issue is real and getting worse."
In the process, Jarvis said, underground aquifers can be irrevocably damaged - not unlike what happened to oil reservoirs when that industry pumped them too rapidly. Tiny fractures in rock that can store water sometimes collapse when it's rapidly withdrawn, and then even if the aquifer had water to recharge it, there's no place for it to go.
"The unitization concept the oil industry developed is built around people unifying their rights and their goals, and working cooperatively to make a resource last as long as possible and not damaging it," Jarvis said. "That's similar to what we could do with groundwater, although it takes foresight and cooperation."
Water laws, Jarvis said, are often part of the problem instead of the solution. A "rule of capture" that dates to Roman times often gives people the right to pump and use anything beneath their land, whether it's oil or water. That's somewhat addressed by the "first in time, first in right" concept that forms the basis of most water law in the West, but proving that someone's well many miles away interferes with your aquifer or stream flow is often difficult or impossible. And there are 14 million wells just in the United States, tapping aquifers that routinely cross state and even national boundaries.
Regardless of what else takes place, Jarvis said, groundwater users must embrace one concept the oil industry learned years ago - the "race to the pump" serves no one's best interest, whether the concern is depleted resources, rising costs of pumping or damaged aquifers. One possible way out of the conundrum, experts say, is maximizing the economic value of the water and using it for its highest value purpose. But even that will take new perspectives and levels of cooperation that have not often been evident in these disputes. Government mandates may be necessary if some of the "unitization" concepts are to be implemented. Existing boundaries may need to be blurred, and ways to share the value of the remaining water identified.
"Like we did with peak oil, everyone knows were running out, and yet we're just now getting more commitment to alternative energy sources," Jarvis said. "Soon we'll be facing peak water, the only thing to really argue over is the date when that happens. So we will need new solutions, one way or the other."
Editor's Notes: A digital image to illustrate this story can be found at this URL: http://www.flickr.com/photos/oregonstateuniversity/4055326017/
Todd Jarvis
[email protected]
@oregonstatenews
http://www.orst.edu
4th International Symposium in RIHN
GEOLOGY/SOIL
http://oregonstate.edu/ua/ncs/archives/2009/oct/hard-lessons-oil-industry-may-help-address-burgeoning-groundwater-crisis
2016-40/3983/en_head.json.gz/10599 | Public Release: 3-Jun-2013
Threatened frogs palmed off as forests disappear
Zoological Society of London
Oil palm plantations in Malaysia are causing threatened forest frogs to disappear, paving the way for common species to move in on their turf, scientists have revealed.
The study, carried out by the Zoological Society of London (ZSL) describes how forests converted to palm oil plantations are causing threatened forest dwelling frogs to vanish, resulting in an overall loss of habitat that is important for the conservation of threatened frog species in the region.
Scientists travelled to Peninsular Malaysia where they spent two years studying communities of frog species in four oil palm plantations and two areas of adjacent forest. The paper is published in the journal Conservation Biology.
Aisyah Faruk, PhD student at ZSL's Institute of Zoology says: "The impact we observed is different from that observed previously for mammals and birds. Instead of reducing the number of species, oil palm affects amphibian communities by replacing habitat suitable for threatened species with habitat used by amphibian species that are not important for conservation. This more subtle effect is still equally devastating for the conservation of biodiversity in Malaysia."
Amphibians are the most threatened vertebrates in the world, with over 40% at risk of extinction. The peat swamp frog (Limnonectes malesianus) is just one of the declining species threatened due to deforestation. It inhabits shallow, gentle streams, swampy areas, and very flat forests, laying eggs in sandy streambeds. Scientists only found this species in forest areas, and if palm oil plantations continue to take over, the peat swamp frog, along with its forest home, could be a thing of the past.
ZSL's Dr. Trent Garner, a co-author on the paper, says: "Existing practices in managing oil palm are not accommodating the highly threatened forest frog species in Malaysia which urgently need saving."
The planting of oil palm plantations leads to the loss of natural forests and peat lands and plays havoc with ecosystems and biodiversity. ZSL, together with collaborators from Queen Mary University of London, Universiti Kebangsaan Malaysia and University of Malaya, continues to work closely with Malaysian palm oil producers in determining if simple modifications to agricultural practices may bring some of the forest species back into areas planted with oil palm and allow them to survive and reproduce in plantations.
High resolution images:
High resolution images available here: https://zslondon.sharefile.com/d/s731844506f54d57b
Media Information
For more information please contact Smita Chandra
Interviews: Available with Aisyah Faruk on request
Founded in 1826, the Zoological Society of London (ZSL) is an international scientific, conservation and educational charity whose mission is to promote and achieve the worldwide conservation of animals and their habitats. Our mission is realised through our groundbreaking science, our active conservation projects in more than 50 countries and our two Zoos, ZSL London Zoo and ZSL Whipsnade Zoo. For more information visit http://www.zsl.org
Smita Singh
[email protected]
@OfficialZSL
http://www.zsl.org
2016-40/3983/en_head.json.gz/10663 | New budget to help EPA with environmental and human health protections
On February 13, 2012, the Obama Administration released a proposed budget of $8.344 billion for the U.S. Environmental Protection Agency (EPA), which will be used to continue to ensure environmental and human health protections.
"This budget is focused on fulfilling EPA’s core mission to protect health and the environment for millions of American families," said EPA Administrator Lisa P. Jackson. "It demonstrates fiscal responsibility, while still supporting clean air, healthy waters, and innovative safeguards that are essential to an America built to last."
The proposal includes $755 million in funding for the Superfund Cleanup initiative, which was developed to support cleanup efforts at hazardous waste sites that address emergencies at some of the nation's most highly prioritized cleanup sites.
The budget also provides $576 million to continue research and innovation in areas such as hydraulic fracturing, potential endocrine disruptors, and green infrastructure.
Hazardous waste cleanup can potentially put response workers in danger. The Occupational Safety and Health Administration (OSHA) has several standards regarding the issue, which require that all workers wear safety products that can include respiratory protection, safety glasses, and protective clothing.
2016-40/3983/en_head.json.gz/10720 | RTS Expands Operations In Hong Kong With Platform Equinix
Deploys in Equinix IBX Data Center in Hong Kong to Deliver Low Latency Direct Market Access in Asia
HONG KONG, Oct. 11, 2012 /PRNewswire/ -- Equinix, Inc. (NASDAQ: EQIX), the global interconnection and data center company, today announced that RTS Realtime Systems Group, a leading global trading solutions provider, has expanded its operations in Asia by deploying in Equinix's Hong Kong International Business Exchange TM ( IBX®) data center. RTS' extended presence in Hong Kong enables the company to further support low latency trading across asset classes on major exchanges throughout the Asia Pacific region. RTS is already leveraging Platform Equinix TM in Chicago, Frankfurt and New York. Already live, the deployment in Equinix's Hong Kong IBX data center will serve as RTS' third gateway from and to trading communities in the APAC region. It will be linked to RTS' global network, which already provides proximity hosting and direct market access (DMA) to more than 65 exchanges globally. RTS' clients use the RTS network for high-speed, low latency access to multiple asset classes in fully managed environments. With access to the rich financial ecosystem inside Equinix's data centers, RTS' clients can be located in proximity to major exchanges across the globe, while improving risk management and decreasing costs. In Hong Kong, clients will be able to directly access the Hong Kong Mercantile Exchange (HKMEx). Andy Woodhouse, RTS managing director, Asia Pacific, said: "This deployment is yet another exciting development in our rapid rise in Asia Pacific. Hong Kong is strategically important for our growth plans. It is the main gateway into and out of China and a major financial hub. Physical connectivity is key, and through the deployment in Equinix's data center in Hong Kong, we are able to facilitate trading on both the HKMEx and the Hong Kong Exchange, but more importantly on markets all over the world." David Wilkinson, senior director, financial services for Equinix, said: "We are thrilled to further extend our relationship with RTS and help facilitate the company's growth in the Asia Pacific region. RTS' continual expansion with Equinix is a strong testimony to our unparalleled global footprint, covering all 16 of the financial centers in the world and best-in-class connectivity, reliability and security. With more and more global traders flocking to colocate in Asia Pacific, Hong Kong, like Singapore and Tokyo, is expected to catch up with Frankfurt as the place to be for automated traders in the next three years. RTS' presence with Equinix in Hong Kong provides the necessary connectivity infrastructure to minimize latency and enable the fastest access to markets, which supports RTS' high-frequency and robust algorithmic trading solutions." Asia has become an engine for wealth creation and growth. In the past three years, five of the fastest growing economies were in Asia including China, Korea, India, Indonesia and Australia. The International Monetary Fund (IMF) forecasts that developing Asia will continue to act as the global growth generator in 2012 to 2013, persistently growing at five to six percent annually on average(1). Given its global footprint and consistent service level, Equinix allows RTS to quickly and easily expand to take advantage of business opportunities in the region. RTS will also gain direct access to Equinix's growing digital ecosystems of financial services participants and cloud, network and content providers using Platform Equinix. 
About RTS Realtime Systems Group
RTS (http://www.rtsgroup.net) delivers high-performance, end-to-end technology products and services across asset classes and continents to elite financial institutions and commodity trading houses. The firm is a global leader in robust electronic trading software, connectivity, hosting, matching and risk management solutions. With standardized low latency connectivity gateways to 135+ exchanges and execution venues worldwide, the firm provides proximity hosting and co-location services to 65+ venues via its global data center network. The RTS infrastructure enables clients to deploy sophisticated trading strategies quickly, securely and cost-effectively throughout multiple trading desks and sites. RTS has offices in Amsterdam, Chicago, Frankfurt, Hong Kong, London, Mumbai, New York, Pune, Singapore and Sydney.
About Equinix
Equinix, Inc. (Nasdaq: EQIX), connects more than 4,000 companies directly to their customers and partners inside the world's most networked data centers. Today, businesses leverage the Equinix interconnection platform in 38 strategic markets across the Americas, EMEA and Asia-Pacific. www.equinix.com.
Forward Looking Statements
This press release contains forward-looking statements that involve risks and uncertainties. Actual results may differ materially from expectations discussed in such forward-looking statements. Factors that might cause such differences include, but are not limited to, the challenges of acquiring, operating and constructing IBX centers and developing, deploying and delivering Equinix services; unanticipated costs or difficulties relating to the integration of companies we have acquired or will acquire into Equinix; a failure to receive significant revenue from customers in recently built out or acquired data centers; failure to complete any financing arrangements contemplated from time to time; competition from existing and new competitors; the ability to generate sufficient cash flow or otherwise obtain funds to repay new or outstanding indebtedness; the loss or decline in business from our key customers; and other risks described from time to time in Equinix's filings with the Securities and Exchange Commission. In particular, see Equinix's recent quarterly and annual reports filed with the Securities and Exchange Commission, copies of which are available upon request from Equinix. Equinix does not assume any obligation to update the forward-looking information contained in this press release.
Equinix and IBX are registered trademarks of Equinix, Inc. International Business Exchange is a trademark of Equinix, Inc.
2016-40/3983/en_head.json.gz/10769 | 5 Things You Need to Know About E-Cigarettes
By LIZ NEPORENT (@lizzyfit) and GILLIAN MOHNEY (@gillianmohney)
The U.S. Food and Drug Administration announced this morning plans to regulate electronic cigarettes, requiring manufacturers to disclose product ingredients to the administration and put warning labels on the devices. However, there’s probably a lot you didn’t know about the controversial e-cigarette.
VIDEO: FDA Wants Warning Label on E-Cigarettes, Ban on Sales to Minors
For instance, e-cigarettes -- which now come in more colors than the iPhone 5C -- have been around since the 1960s. They’ve only started to take off in the last decade with more than 250 brands and flavors like watermelon, pink bubble gum and Java. An estimated 4 million Americans use them, according to the Tobacco Vapor Electronic Cigarette Association.
Click through for answers to more of your burning questions.
E-Cigarettes Explained
What are e-cigarettes?
E-cigarettes are battery operated nicotine inhalers that consist of a rechargeable lithium battery, a cartridge called a cartomizer and an LED that lights up at the end when you puff on the e-cigarette to simulate the burn of a tobacco cigarette. The cartomizer is filled with an e-liquid that typically contains the chemical propylene glycol along with nicotine, flavoring and other additives.
The device works much like a miniature version of the smoke machines that operate behind rock bands. When you "vape" -- that's the term for puffing on an e-cig -- a heating element boils the e-liquid until it produces a vapor. A device creates the same amount of vapor no matter how hard you puff until the battery or e-liquid runs down.
How much do they cost?
Starter kits usually run between $30 and $100. The estimated cost of replacement cartridges is about $600, compared with the more than $1,000 a year it costs to feed a pack-a-day tobacco cigarette habit, according to the Tobacco Vapor Electronic Cigarette Association. Discount coupons and promotional codes are available online.
Read more: E-Cigarette Sales to Hit $1 Billion
Are e-cigarettes regulated?
Until today, e-cigarettes were uncontrolled by the government despite a 2011 federal court case that gave the FDA the authority to regulate e-smokes under existing tobacco laws rather than as a medication or medical device, presumably because they deliver nicotine, which is derived from tobacco.
The agency had hinted it would begin regulating them this year, but its only action against the devices to date was a letter issued in 2010 to electronic cigarette distributors warning them to cease making various unsubstantiated marketing claims.
This has especially worried experts like Erika Seward, the assistant vice president of national advocacy for the American Lung Association.
"With e-cigarettes, we see a new product within the same industry -- tobacco -- using the same old tactics to glamorize their products," she said. "They use candy and fruit flavors to hook kids, they make implied health claims to encourage smokers to switch to their product instead of quitting all together, and they sponsor research to use that as a front for their claims."
Dr. Richard Besser, ABC News's chief health and medical editor, said public health officials have been concerned that e-cigarettes could be a gateway to further tobacco use.
"Data show use of e-cigarettes by high school and junior high school students is on the rise," Besser said. "Once addicted to nicotine, will users move on to using tobacco with all the inherent health risks?”
"Countering the view are those who view e-cigarettes as an important step towards risk reduction for current cigarette smokers," he added. "They do not deliver the carcinogens that are the cause of so many health problems."
Read More: E-Cigarette Explodes In Man's Mouth
What are the health risks of vaping?
The jury is out. The phenomenon of vaping is so new that science has barely had a chance to catch up on questions of safety, but some initial small studies have begun to highlight the pros and cons.
The most widely publicized study into the safety of e-cigarettes was done when researchers analyzed two leading brands and concluded the devices did contain trace elements of hazardous compounds, including a chemical which is the main ingredient found in antifreeze. But Kiklas, whose brand of e-cigarettes was not included in the study, pointed out that the FDA report found nine contaminants versus the 11,000 contained in a tobacco cigarette and noted that the level of toxicity was shown to be far lower than that of tobacco cigarettes.
However, Seward said because e-cigarettes remain unregulated, it's impossible to draw conclusions about all the brands based on an analysis of two.
"To say they are all safe because a few have been shown to contain fewer toxins is troubling," she said. "We also don't know how harmful trace levels can be."
Thomas Glynn, the director of science and trends at the American Cancer Society, said there were always risks when one inhaled anything other than fresh, clean air, but he said there was a great likelihood that e-cigarettes would prove considerably less harmful than traditional smokes, at least in the short term.
"As for long-term effects, we don't know what happens when you breathe the vapor into the lungs regularly," Glynn said. "No one knows the answer to that."E-Cigarettes ExplainedDo e-cigarettes help tobacco smokers quit?
Because they preserve the hand-to-mouth ritual of smoking, Kiklas said e-cigarettes might help transform a smoker's harmful tobacco habits to a potentially less harmful e-smoking habit. As of yet, though, little evidence exists to support this theory.
In a first of its kind study published last fall in the medical journal Lancet, researchers compared e-cigarettes to nicotine patches and other smoking cessation methods and found them statistically comparable in helping smokers quit over a six-month period. For this reason, Glynn said he viewed the devices as promising though probably no magic bullet. For now, e-cigarette marketers can't tout their devices as a way to kick the habit without first submitting their products to the FDA as medical devices and proving that they work to help users quit. No company has done this.
Read More: Teen Use of E-Cigarettes On The Rise
Seward said many of her worries center on e-cigarettes being a gateway to smoking, given that many popular brands come in flavors and colors that seem designed to appeal to a younger generation of smokers.
"We're concerned about the potential for kids to start a lifetime of nicotine use by starting with e-cigarettes," she said.
Though the National Association of Attorneys General today called on the FDA to immediately regulate the sale and advertising of electronic cigarettes, there were no federal age restrictions to prevent kids from obtaining e-cigarettes. Most e-cigarette companies voluntarily do not sell to minors, yet vaping among young people is on the rise.
A Centers for Disease Control and Prevention study found nearly 1.8 million young people had tried e-cigarettes and the number of U.S. middle and high school student e-smokers doubled between 2011 and 2012.
2016-40/3983/en_head.json.gz/10817 | Night of the living xMac
"Just when I thought I was out, they pull me back in!"
This is the second post in my ongoing series pondering the Intel Macs of the future. In part one, I considered the professional line. Today I've got something on my mind that's a bit harder to pin down: the xMac.
Long-time readers are probably already familiar with the concept of the xMac. As far as I know, the term "xMac" was coined in the Mac Ach right here at Ars. I tried to find the exact post but, well, long-time readers also know what the forum search is like. The earliest post I could find that mentions the xMac was made by Jade on November 28, 2001, and it doesn't even explain the term. This leads me to believe that this post is not the origin of the xMac meme. Anyway, in a post made the next day, Jade adds this description: "$1000 xMac: gamer/burner/music machine."
The xMac saga continues on from there, across ninety-six pages of search results spanning almost four years. During that time, the xMac was tossed like leaf in the wind in the Mac Ach. The xMac is an iTunes device. It's a game machine. It's a sub-$1000 Mac. No, it's sub-$500. It's a floor wax, a dessert topping. The xMac is all things to all people!
Eventually, as will happen with any long-running forum topic, everyone pretty much forgot how the whole thing started. (Well, I did, anyway.) Discussion of the xMac continued, but each reader likely had a slightly different idea of what, exactly, that term meant.
Things finally came to a head with the run-up to the introduction of the Mac mini. A thread named Headless xMac confirmed! (Sub $500 Mac) chronicled the final days. But even then, there was debate about what would constitute vindication or defeat for proponents and opponents of the xMac idea—both its likelihood of becoming a reality and its intrinsic value as a product idea. In the end, it was generally accepted that the Mac mini was "close enough" to being the fabled xMac, and the matter was (blessedly, most would agree) closed.
Well, I'm bringing it back. As far as I'm concerned, it never left. My own personal conception of the xMac does not match-up well with the Mac mini. I'd go so far as to say that my xMac vision is also a lot closer to the original concept of the xMac...except I've already established that I'm not sure exactly where or when the xMac idea originated, so never mind.
I do think that all of the variations on the xMac idea have one important thing in common. It seems like everyone can agree that the xMac is "headless." That is, it has no integrated display like the iMac or a laptop. A corollary that is also widely agreed upon is that the Power Mac, as it currently exists, is definitely not the xMac. That leaves the consensus description of the xMac as "a headless Mac that is not the Power Mac."
By that definition, the Mac mini fits the bill. Technically, the Power Mac G4 Cube also qualified. Despite the name, it was definitely not like the Power Mac (i.e., a full-sized tower). But the Cube is not my xMac either.
Here's what I want. Start with a choice of two possible CPUs: the very fastest single CPU Apple sells, and the second-fastest. In contemporary terms, these would both be dual core CPUs. The internal expansion buses should also be top-of-the-line, but with less capacity than the Power Mac. One high-speed slot for the graphics card and at least one other, slower slot for another card would be fine. RAM capacity should be roughly half that of the Power Mac. There should be room for two internal hard drives and a single optical drive.
External expansion should be similar to the Power Mac, but with fewer of the "expensive" ports (e.g., FireWire 800) and more of the cheap ones (e.g., at least three USB ports). It should be wireless-capable, both long- and short-range. The short-range capability should be standard. (BlueTooth, in today's terms.)
Then there's the case. It should be much, much smaller than the Power Mac's full-sized tower. Think Shuttle-sized, but styled like a Mac. The Shuttles are nice, but they're shaped too much like shoe boxes for my taste. The xMac should be distinctive, like the mini. Maybe that means a pizza-box form factor, or maybe a skinny, upright mini-tower, I don't know. I trust that Ive can think of something.
There should be two pre-defined configurations. The first should use the "second-fastest" CPU, a less capable optical drive, a single, medium-sized hard drive, and a mainstream video card. The second should use the fastest CPU, the best optical drive, a large hard drive, and a high-end video card. The build-to-order (BTO) options must span the entire range for each item that can be configured: CPU, RAM, hard drive(s), video card, optical drive, and wireless.
So far, this xMac sounds like a product between the Power Mac and the iMac, which is probably a good idea. There's a bit of a hole there in Apple's line-up. But there's One More Thing...
You know those build-to-order options I mentioned? When I said that they should "span the entire range," I meant the entire range. No, I'm not talking about the high-end. Think in the other direction. Think of the top item in a BTO pop-up menu. Now picture this:
Ladies and gentlemen, I give you the xMac. My xMac. The Mac that I want to buy. Reduced to one sentence, it's a completely configurable, headless Mac that trades expandability for reduced size and cost.
Now, granted, the hypothetical Apple web store BTO form shown above is less appealing because it's presented in terms of today's technology. Who cares if the CPU is optional if you can't easily buy one from a third party? But picture it a few years from now with a choice of Intel CPUs in that pop-up menu. Suddenly, buying a CPU-less Mac starts to actually make sense.
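As a thought experiment, here is a small sketch of what a fully open build-to-order configuration might look like in code, where every slot, including the CPU, can be left empty. The component names and prices are invented placeholders, not real Apple options or pricing.

```python
# Sketch of a fully open build-to-order configuration in which every
# component, including the CPU, can be "None". Part names and prices are
# invented placeholders, not real Apple options or pricing.
from typing import Optional

BASE_ENCLOSURE = 199          # hypothetical bare xMac enclosure + logic board

CATALOG = {
    "cpu":     {None: 0, "dual-core (fast)": 499, "dual-core (fastest)": 699},
    "ram":     {None: 0, "1 GB": 89, "2 GB": 169},
    "disk":    {None: 0, "250 GB": 99, "500 GB": 179},
    "gpu":     {None: 0, "mainstream": 129, "high-end": 299},
    "optical": {None: 0, "combo": 49, "dual-layer burner": 99},
}

def price(config: dict[str, Optional[str]]) -> int:
    """Total price of a configuration; unspecified slots default to None."""
    return BASE_ENCLOSURE + sum(
        CATALOG[slot][config.get(slot)] for slot in CATALOG
    )

# A bring-your-own-CPU-and-disk order: only RAM, GPU, and optical are bought.
barebones = {"ram": "2 GB", "gpu": "mainstream", "optical": "combo"}
print(price(barebones))   # 199 + 169 + 129 + 49 = 546
```

The interesting property is that "None" in every pop-up is a legal order, which is exactly what makes the idea appealing to geeks and, as discussed below, worrying from Apple's point of view.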
Well, it makes sense from the perspective of a tech-savvy customer, anyway. Unfortunately, when considered from virtually any other perspective, the idea is a loser. Although it pains me to admit it, I never expect the xMac as I've described it to become a reality. Why is it such a bad idea? Let me count the ways.
Nearly all of the consumer benefits of this xMac come at the expense of Apple's profit margins. Every dollar I can save by only buying exactly what I need, and nothing more, is a dollar that Apple loses. Worse, all this configurability vastly increases Apple's inventory management challenges, makes reporting more difficult, and increases labor costs. Pre-defined, pre-packaged products are much easier for Apple to deal with, and much more profitable as well. Given all of this, what's Apple's motivation to produce such a product?
What about those consumer benefits? Happy customers are good for Apple in the long run, right? True, but will the xMac actually make customers happy? The computer geeks are in the bag, but what about everyone else?
Pity the poor "normal" customer given enough BTO configuration options to hang himself. All consumers want to feel like they're getting the best possible deal. To a computer geek, that means not being forced to buy anything unnecessary. But to everyone else, that same configurability has the opposite effect, causing worry and doubt. It's hard to know that you're getting the optimal combination of parts for your needs when you don't understand the effect each component has on overall system performance, and you don't know what a fair price is for each part.
But hey, didn't I specify two pre-defined configurations for the xMac? Won't the novices just choose one of those instead? Sure, maybe some will. But that nagging feeling that someone, somewhere is getting a better deal than you is very powerful. Eventually, it becomes an open secret that the way to get the best deal on a Mac is to find a computer geek and have him tell you the exact configuration of xMac you should buy. I'm sorry, but we've been there, done that, and it stinks. Apple is supposed to be about simplicity, not hassles—even self-inflicted hassles.
The two points above reinforce each other. As the xMac slowly becomes the obvious sweet spot in the product line-up, it begins to cannibalize the sales of the higher-margin Power Macs, hurting Apple's bottom line even more. And as more people find themselves shopping for an xMac because they heard it's the best deal, the formerly simple Apple buying experience becomes annoying and complex for an increasing proportion of Apple's customers.
As inadequate as computer geeks may find Apple's current, inflexible BTO system, it's just about perfect for most of Apple's customers. People want some choice in order to feel like they're in control—even if they don't exercise it and end up buying a box-stock product from an Apple Store anyway. For those that do choose to build-to-order, the limited choices Apple offers are mostly understandable even to novices. "A bigger hard drive means you can store more stuff." "More memory means you can run more stuff at the same time." One or two simple choices are enough. More, and the customer is overwhelmed. Fewer, and they feel too confined.
Yes, it looks like my xMac will have to remain a fantasy. But I'd be happy with a compromise: a completely configurable headless Mac that trades expandability for reduced size and cost. Call it the Power Mac mini, make it cheaper and faster than at least one Power Mac model, and give the "deluxe" version the fastest available single CPU. That'd still cannibalize some Power Mac sales, but it'd also present an opportunity to up-sell iMac and (especially) Mac mini customers. It could still be a net win.
Finally, let's not forget about the elephant in the room. "If Apple didn't insist on restricting Mac OS X to run only on Apple hardware, this problem wouldn't exist in the first place!" Yes, that's true, but would an xMac by any other vendor be as sweet? Or make Apple as much money? Or dilute the brand by compromising the "Apple experience"? And on and on—this is another topic entirely, and one I don't want to dive into right now. Suffice it to say that I have a hard time seeing a happy ending to the perennial "Mac OS X on open hardware" scenario right now. But things change, so I'm keeping an open mind.
Anyway, back on topic. The Mac mini didn't close the book on the xMac for me, but the Power Mac mini might. I know this post was supposed to be about the Intel Macs of the future, but both my xMac idea and its compromised cousin, the Power Mac mini, are CPU-agnostic. Obviously, it's too late for either of them to arrive in the PowerPC era, so they're both Intel Mac ideas by default.
I'm sure would-be Mac builders everywhere would like every Mac to be as configurable as the xMac I've described. Sadly, I think even one such model is wishful thinking. The more interesting question is this: how many "regular" Mac users out there would be interested in a Power Mac mini?
I've always been a Power Mac buyer, but I've never actually filled one of those monster towers to the brim. In isolated cases, I've maxed-out some kinds of expansion (cough—Power Mac G5 internal hard drives—cough). But I don't think I've ever used more than one expansion slot (not counting the graphics card), and I'm more than willing to sacrifice some of my eight RAM slots (six filled) for reduced system size and cost.
Yes, I'd love to have an xMac, but I'd gladly settle for a Power Mac mini. How about you?
John Siracusa has a B.S. in Computer Engineering from Boston University. He has been a Mac user since 1984, a Unix geek since 1993, and is a professional web developer and freelance technology writer. Email [email protected]
Twitter @siracusa | 科技 |
2016-40/3983/en_head.json.gz/10818 Ars Technica UK
Congress considers bill to make radio “pay to play”
Terrestrial radio doesn't have to pay performance fees to play music, but that …
- Dec 20, 2007 4:11 am UTC
Radio has always had a strange exemption under US law: it doesn't need to pay the performers of the music it plays. Internet radio needs to pay. Satellite radio needs to pay. Digital music stations transmitted over cable lines have to pay. But not radio. The House and Senate are now considering matching bills that would remove this inconsistency by forcing terrestrial radio to pay up if it wants to keep playing music—and broadcasters are livid.
While radio does pay a fee to songwriters, it pays no performance rights fee, in contrast to just about every other developed country on the planet. The broadcasters argue that they are providing free advertising to musicians, who then make money from touring and record sales. Plenty of artists don't buy this (especially older artists who don't tour or sell albums, but whose hits still keep oldies stations in business), and they can't see why radio is exempted from paying for the music it uses to rake in ad dollars.
Tom Waits, one of the most innovative singer/songwriters of the last quarter century, helped to found the musicFIRST coalition that advocates for a performance fee. "It's just plain wrong for radio to be allowed to build profitable businesses with growing revenues on the backs of artists and musicians without paying them fairly for it," he said in a statement today.
musicFIRST has the backing of some powerful members of Congress. Rep. Howard Berman (D-CA) and Rep. Darrell Issa (R-CA) have now introduced a performance rights bill in the House while Sen. Patrick Leahy (D-VT) and Sen. Orrin Hatch (R-UT) introduced the same bill in the Senate. The RIAA, of course, supports the plan.
The current draft sets up a scheme where commercial broadcasters pay a flat yearly fee (set by the government) to a group like SoundExchange, which would distribute the money to artists and labels. Small commercial stations would only pay $5,000 a year, and nonprofit stations like NPR would pay only $1,000 a year. Talk radio and religious broadcasts would pay nothing.
This has inspired apoplectic press releases from broadcasters and their supporters. The National Association of Broadcasters' Dennis Wharton played the xenophobia card. "After decades of Ebenezer Scrooge-like exploitation of countless artists, RIAA and the foreign-owned record labels are singing a new holiday jingle to offset their failing business model," he said. "NAB will aggressively oppose this brazen attempt to force America's hometown radio stations to subsidize companies that have profited enormously through the free promotion provided by radio airplay."
The Free Radio Alliance, likewise, calls the move a "transfer tax on local communities." Spokesperson Cathy Rought also deplored the fact that the government would "fundamentally meddle with the established business model of one industry." (Though without government intervention, a free market would long ago have demanded these fees; the RIAA has wanted them for years.)
Debate on the bills is likely to be fierce; the NAB issued a statement today pointing out that the competing Local Radio Freedom Act (which would keep the current system in place) currently has the backing of 127 members of the House. In a sign of just how nasty this could get, the NAB has already retaliated by asking Congress to look into the propriety of major label recording contracts.
Nate Anderson
2016-40/3983/en_head.json.gz/11062 DOE Partners with Other Federal Agencies Working on the Wind River Indian Reservation
What does this project do? Goal 1. Protect human health and the environment.
On May 8 and 9, a joint federal agency collaboration was held to discuss financial and technical assistance to Wind River Tribes in Riverton, Wyoming. Requested by staff from the U.S. Department of Energy (DOE) Office of Legacy Management (LM), the meeting was held at the U.S. Environmental Protection Agency (EPA), Region 8 offices in Denver, Colorado. Other federal agencies represented were the Bureau of Indian Affairs, U.S. Department of Agriculture, and U.S. Geological Survey. Tribal representatives from the Northern Arapaho Business Council (NABC), the Joint Business Council (JBC), and the Wind River Environmental Quality Commission (WREQC) also participated in the meeting.
LM Site Manager, Bill Dam, presenting objectives and milestones.
The former processing site in Riverton is a Uranium Mill Tailings Radiation Control Act (UMTRCA) Title I site, licensed to LM for long-term surveillance and maintenance. The site is within the boundaries of the Wind River Indian Reservation shared by the Northern Arapaho and Eastern Shoshone Tribes. Each tribe has its own six-member, elected Council, and together, the twelve members comprise the JBC, which is tasked with the day to day activities of jointly owned resources and joint programs of the Tribes.
DOE currently has two cooperative agreements with the Wind River Tribes. A 5-year cooperative agreement with NABC provides potable water to the community via the Alternative Water Supply System (AWSS). The JBC–DOE cooperative agreement administered by WREQC provides oversight and outreach support.
Multiple federal agencies support the Wind River Tribes. In 2012, total federal expenditures for the joint programs exceeded $21 million. Personnel and equipment supplied by the supporting agencies are shared by the Tribes for administration of federal, and other, programs. The percent of resources used by each program must be accounted for by the recipients.
Matt Parker, DOE Office of Management (MA) Contracting Officer and Darryl Groves, MA Contract Specialist, provided an overview of the process and document requirements for cooperative agreements. Bill Dam, LM Riverton Site Manager, talked about DOE’s requirements for financial assistance, including technical evaluation of cost proposals.
An overview of federal funding standards that are common to all was delivered by Paul Felz, EPA Audit Coordinator, who also introduced the Cooperative Audit Resolution and Oversight Initiative (CAROI) document. CAROI was created to provide guidance and resolve audit findings of oversight issues through open dialogue, and to assist with the early detection of potential issues. The goal is to encourage communication and foster collaboration among all levels of government, allowing agencies to accept alternative documentation to support cost questions, while ensuring no harm to government interests.
Changes to federal cost standards under Title 2 Code of Federal Regulations Part 200—which were intended to reduce administrative burden, waste, fraud, and abuse—were discussed. The new rules and procedures provide for more straightforward internal controls.
To assist with our common goals, a dynamic web tool, MAX Information System, was introduced. MAX can be used by the Tribes to share and receive information with federal agencies. Administered by the U.S. Office of Management and Budget (OMB), MAX supports communication and collaboration between federal agencies and funding recipients. In addition to functions that help to meet documentation requirements for audits, MAX capabilities can be used for work plans and to streamline multiple technical projects at a site. Key benefits of the tool are transparency and early detection of issues before an audit is required.
LM is in the process of building pages within MAX and plans on using it to collaborate with other agencies. However, it was emphasized that MAX will not be required. JBC agreed to use the MAX Information System.
Representatives of each federal agency explained their respective agency’s relationship with the Wind River Tribes and gave a status of their current projects. Some of the initial outcomes include:
DOE efforts to provide contract administration guidance to the tribes; and
Changes to the payment process and payment system to improve adherence to financial assistance regulations.
Federal agencies will make every effort to be consistent and, where possible, consolidate application forms and other award documents. It was specified that recipients of federal funding are responsible for timely submission of documentation throughout the award cycle, and continuation awards are to be reviewed annually.
The meeting was a successful beginning to interaction among several federal agencies working on the Wind River Indian Reservation, for sharing requirements and processes that will facilitate awareness and avoid duplicative efforts. Advanced computer technology and future meetings at various venues will continue to strengthen partnerships.
EPA Audit Coordinator, Paul Felz, speaking about changes to federal cost standards.
2016-40/3983/en_head.json.gz/11066 | The genus Protopterus is composed of the four species of lungfish native to Africa, and is the only genus in its family, Protopteridae. The largest of these species, the marbled lungfish (P. aethiopicus) can reach up to 200 cm long and has the largest vertebrate genome reported to date; it also has one of the largest genomes known from any living organism (along with the freshwater amoeboid Polychaos dubium and the Japanese plant Paris japonica). The Gilled African lungfish, P. amphibius, is the smallest lungfish in the world at about 44cm long. Besides the four Protopterus species, there exist two other species of lungfish; the South American lungfish (Lepidosiren paradoxa, family Lepidosirenidae) is in the same order (Lepidosirenifores) as Protopteridae, and the Australian lungfish (Neoceratodus forsteri, family Ceratodontidae), is the only extant species in order Ceratodontifores).The closest living relatives of tetrapods, lungfish often live in anoxic shallow swamps and ponds, which are likely to dry up in the dry season, so these fish have evolved as obligate air breathers and can endure long periods out of water, holed up in burrows in the dried mud. To breathe air, the lungfish’s air bladder has evolved into a “lung,” a highly vascularized pocket of the digestive tract, in which gulped air can be stored to oxygenate the blood that runs through this organ. Their heart is also adapted to pumping oxygenated and de-oxygenated blood in separate streams to different parts of the body. The lungfish ear is highly developed, much like the tetrapod ear, and adapted to hearing through air rather than water. Elongate and eel-like in appearance, African lungfish have soft scales and their pelvic fins are modified into long threadlike appendages which they can use to crawl along muddy surfaces. They are carnivorous, eating invertebrates, fish and amphibians. Lungfish are eaten by native Africans, although they have a strong taste, and are thus not widely enjoyed. Because of increased fishing pressure and conversion of breeding habitats to agriculture, populations of marbled lungfish are on the decline in Lake Victoria and Lake Nabugabo. (Christensen-Dalsgaard et al. 2011; Entsua-Mensah et al. 2010; Goudswaard et al. 2001; Wikipedia 2011a; Wikipedia 2011b)
© Dana Campbell
Data Sheet—Saturday, February 20, 2016
This is Jonathan Vanian, filling in for Robert Hackett while he is off.
The battle between the public and private sector over encryption technology kicked into warp speed this week.
On Tuesday, a federal judge in Riverside, California ordered Apple to build a custom version of its iOS operating system that can be installed into the iPhone of one of the shooters responsible for the December rampage killings in San Bernardino.
Because the data inside the shooter’s iPhone is encrypted, the FBI can’t simply retrieve the information it wants from the device’s memory chips. Instead, it needs the device to be unlocked with the appropriate PIN number.
However, Apple’s tough iPhone security measures make the process of guessing the phone’s PIN number a risky business. If the FBI enters the wrong PIN number too many times, the phone will permanently delete the stored data.
A special version of the iPhone operating system that would either bypass or remove that data-deletion feature would presumably make it easier for the FBI to crack the PIN number without fearing a total data wipeout.
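To make the mechanics concrete, here is a deliberately simplified toy model of a retry-limited passcode check. It is not Apple's actual implementation, and the attempt limit and messages are assumptions for illustration only; it just shows why the auto-erase feature makes brute-force guessing risky, and what a modified operating system that bypasses it would change.

```python
# Toy model of a retry-limited passcode check.
# NOT Apple's implementation; MAX_ATTEMPTS and behavior are assumptions for illustration.
MAX_ATTEMPTS = 10

def try_passcode(entered: str, correct: str, state: dict) -> str:
    """Simulate one unlock attempt against a device that wipes itself
    after too many consecutive failures."""
    if state["wiped"]:
        return "data already erased"
    if entered == correct:
        state["failed"] = 0
        return "unlocked"
    state["failed"] += 1
    if state["failed"] >= MAX_ATTEMPTS:
        state["wiped"] = True  # the auto-erase step a modified OS would bypass
        return "data erased"
    return f"wrong passcode ({state['failed']} of {MAX_ATTEMPTS} failures)"

state = {"failed": 0, "wiped": False}
print(try_passcode("0000", "1234", state))
```

With that limit removed, a four-digit PIN falls to at most 10,000 guesses, which is exactly why the two sides care so much about this one feature.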
Apple CEO Tim Cook was displeased with the court order and wrote a letter to customers in which he said the custom operating system is “too dangerous to create” because it circumvents the company’s security features.
Cook claims that the government is asking Apple to weaken the measures it takes to encrypt its data. To create the custom software would set a bad precedent that “would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data.”
That’s balderdash, the Department of Justice responded in the form of a court motion. The DOJ claimed that it won’t “require Apple to create or provide a ‘back door’ to every iPhone.” Apple’s public stance on the issue is only a “public brand marketing strategy.”
Now, representatives of the House Energy and Commerce Committee have invited Cook and FBI Director James Comey to appear at a yet-to-be-scheduled hearing to discuss encryption, a topic that will almost certainly be debated during the upcoming presidential elections. Additionally, the House Judiciary Committee reportedly asked Apple officials to testify at a similar hearing on March 1.
This chain of events presents a perfect storm to bring the topic of encryption to the public stage.
You have the world’s most valuable company, the U.S.’s leading criminal investigation and enforcement agency, and the controversial issues of terrorism, national security, and data privacy all intermingling.
Over the past few years, the topic of whether companies should ease up on encryption seemed to be of interest to only those deeply involved in the issue. Occasionally, there would be a mainstream news report on the issue. But generally speaking, the topic seemed to be of concern primarily to insiders or security conference attendees.
This time, considering the powerful players involved and its relation to a terrorist attack, the topic of encryption might stick around in the public forum.
Jonathan Vanian
@JonathanVanian
[email protected]
Welcome to the Cyber Saturday edition of Data Sheet, Fortune’s daily tech newsletter. Fortune reporter Robert Hackett is off for the week. You can reach him via Twitter, Cryptocat, Jabber, PGP encrypted email, or however you (securely) prefer. Feedback welcome.
Threat Sheet
by Jonathan Vanian
February 20, 2016, 2:50 PM EDT
Items tagged with Amazon Instant Video
Amazon Instant Video App for iOS Now Supports Apple AirPlay
Amazon's Instant Video service may have started out as a side benefit to its Prime membership, but it's quickly becoming the main focus and a worthy contender to Netflix. The service is now home to over 40,000 movies and TV episodes available for unlimited streaming, and over 140,000 videos available to rent or purchase, which you can then view on your television or mobile device. If you're an iOS user, you'll be happy to know the service now supports AirPlay as well. That's one of the new features that was rolled into version 2.1 released this week. Other added/updated features included in the...
Move Over Netflix, Amazon Rolls Out Instant Video App for iPad
The streaming video wars are beginning to heat up. In addition to native Netflix and Hulu apps on Apple's iPad device, Amazon today announced that its Amazon Instant Video service is now available as a downloadable app for the world's most popular tablet device. Amazon's video app brings more than 20,000 titles from its Prime Instant Video catalog to the iPad, along with several TV shows like Glee, Downton Abbey, and Fringe. "We want to give customers the convenience of being able to watch all of their movies and TV episodes, wherever they are, on their iPad," said Anthony Bay, Amazon.com vice...
Oct 21, 2011 ShareTwitter Facebook Google+ Email If a high number of Facebook friends gives you a bigger brain, then CEO Mark Zuckerberg, seen here in Sept., must have one massive cortex.
Originally published on October 21, 2011 9:43 am A lot of people seem to be running wild with the idea that there is a direct, positive link between Facebook and the brain's grey matter. I want to believe a study that suggested Facebook can enhance the size of key parts of your brain. Really I do. But Facebook hasn't been proved to build a bigger brain just yet, and having a bigger brain wouldn't necessary mean you're better at making virtual friends either. The rush to credulity on this proposition may lie in some of the language used to describe the study; PR materials called the findings a "direct link." That tends to make me think cause and effect. But in this study, that hasn't been proved. University College London researchers merely found a correlation, not a direct cause, between the amount of grey matter in college students' brains and the number of friends they had on Facebook. The students with larger, friend networks happened to have more grey matter seen in brain scans. The researchers have emphasized the association and not causation in a statement and a briefing for reporters. Lead-author Ryota Kanai and his team looked at regions of the brain that have been known to correspond to social cognition: the amygdala, the right superior temporal sulcus, the left middle temporal gyrus and the entorhinal cortex. Grey matter, or the brain tissue responsible for processing, is found in these regions corresponding to memory, emotional response, perception, navigation and reading social cues. In other words, the researchers were looking in the places where social cognition occurs. When they compared the brain scans of 125 healthy university students to the number of online friends and real-life friends they had, those with more friends had more grey matter in the amygdala — a region already known to be larger in people with a larger network of real-world friends. They also saw more grey matter in the other three brain regions of people with a high number of online friends. The results were replicated in 40 more students. What do others think? "I'm cautiously optimistic about the relationship," Dr. James Fowler, who wasn't involved in the research, tells Shots. Fowler, a professor of medical genetics and political science at University of California, San Diego, investigates brain function and social networking. His research has shown that genes alone don't dictate social behavior. So what does the University College researchers' work have going for it?: They're looking in the right place. They did see more grey matter in the brains of people with more virtual friends. Variability in the grey matter occurs across individuals and populations. The brain changes as it matures. More grey matter doesn't necessarily mean better social networking. Further research needs to be done to directly connect the two factors. "Next they should do a functional study," Fowler says. "What happens when people are actually on Facebook? Is increasing the amount of engagement simultaneously causing our brains to change to make social interaction more enjoyable?" The findings appeared Wednesday in Proceedings of the Royal Society B.Copyright 2013 NPR. To see more, visit http://www.npr.org/. View the discussion thread. © 2016 KALW | 科技 |
This animated blink comparison shows five different versions of observations that NASA's Curiosity rover made about one hour apart while Mercury was passing in front of the sun on June 3, 2014. Two sunspots, each about the diameter of Earth, also appear in the images, moving much less during the hour than Mercury's movement.This is the first observation of any planet's transit of the sun observed from any planet other than Earth. It is also the first observation of Mercury from Mars. With precise information about when the transit would occur, the rover team planned this observation using the telephoto-lens (right-eye) camera of Curiosity's Mast Camera (Mastcam) instrument. The camera has solar filters for routine observations of the sun used for assessing the dustiness of the atmosphere. Mercury appears as a faint darkening that moves across the face of the sun. It is about one-sixth the size of a right-Mastcam pixel at the interplanetary distance from which these images were taken, so it does it does not appear as a distinct shape, but its position follows Mercury's known path. Each of the five versions of the image presented here blinks back and forth between two views recorded at different times during the transit. North is up. The version on the left is minimally enhanced, for a natural looking image of the sun with two sunspots barely visible. The second version has limb darkening removed, the edges masked. The third has enhanced contrast. The fourth has a line added to indicate the calculated path of Mercury during the transit. The fifth adds annotation to point out which spot is Mercury (in the cross hairs) and to identify two sunspots.For a video presentation of these images, see: http://www.jpl.nasa.gov/video/?id=1309 .Transits of the sun by Mercury and Venus, as seen from Earth, have significant history. Observations of Venus transits were used to measure the size of the solar system, and Mercury transits were used to measure the size of the sun.
Image Credit: NASA/JPL-Caltech/MSSS/Texas A&M
David PritchardPhoto / Donna CoveneyFull Screen MIT researchers compare atomic masses with unprecedented accuracy
MIT atomic physicists have developed a technique that compares the masses of single charged atoms with unprecedented accuracy -- akin to measuring the distance between Boston and Los Angeles to within the width of a human hair.
The study, published in Science Express, reports the ratio of the masses of nitrogen and acetylene molecules with a precision below 1 part in 100 billion.
The work, led by David E. Pritchard, the Cecil and Ida Professor of Physics and a principal investigator in the MIT-Harvard Center for Ultracold Atoms, opens the door to numerous applications, including testing E=mc2 and weighing chemical bonds for weakly bound or very rare ionic species.
Pritchard, also affiliated with MIT's Research Laboratory for Electronics, and members of his group have been a leader in the field of high-precision mass spectrometry for more than 10 years. They have developed techniques to trap and detect a single charged atom, known as an ion, for more than a month at a time. They have used this method to publish the atomic masses of 13 different atoms ranging from hydrogen to cesium with an uncertainty of around 1 part in 10 billion.
Atomic mass is measured by comparing the rates at which different molecular ions orbit magnetic field lines in a magnetic trap. The precision of this widely used technique was limited by changes that occurred in the magnetic field during the minutes required to switch the two ions being compared. The MIT laboratory had its own special challenge: magnetic field variations caused by a nearby subway line. The group was forced to do all measurements between 1:30 and 5:30 a.m., when the subway and elevators in their building were shut down.
In these recent experiments, the Pritchard group for the first time put two ions in the trap at the same time. Previously, this generated problems when the two ions came too close together and generated bothersome electrostatic interactions. The researchers overcame this obstacle by placing the ions 1 mm apart in a common circular orbit. In this configuration, the ions in the trap are like a waltzing couple.
"They spin around on the dance floor, always a fixed distance from each other," said Simon Rainville, the first author of the paper and a postdoctoral fellow at Harvard. The researchers then took advantage of the coupled motion to monitor and control the trajectories of the ions in the trap.
The new technique, akin to using a weight-balanced scale like those once used for meat or produce, dramatically increases the precision with which atomic masses can be measured. And thanks to a new highly automated computer system, masses are measured in the MIT lab 24 hours a day.
The field has advanced significantly since the 19th century, when Italian chemist Amadeo Avogadro first observed that gases at the same temperature and pressure combined in definite volume ratios, and equal volumes of the gases had the same number of molecules. By weighing the volumes of gases, he could determine the ratios of their atomic masses.
In the early 20th century, Pritchard noted, mass comparisons of atomic species had a precision of around 1 part in 1,000, and when he started working in the field, the state of the art was a few parts in 100 million. Today, the precision has reached several parts in a trillion. "In a logarithmic sense, we've made nearly as much progress as in the entire previous history of mass spectrometry," he said.
In addition to Pritchard and Rainville, authors include James Thompson, a postdoctoral researcher at MIT.
This work is supported by the National Science Foundation.
Topics: Physics
"The idea of deep time ... explains so much of the world around us," Bill Nye said in the viral video. August 31st, 2012 04:34 PM ET
Creationists hit back at Bill Nye with their own video By Eric Marrapodi, CNN Belief Blog Co-Editor
Follow @EricCNNBelief
(CNN) - Bill Nye's viral YouTube video pleading with parents not to teach their children to deny evolution has spawned an online life of its own, with prominent creationists hitting back against the popular TV host.
"Time is Nye for a Rebuttal," Ken Ham the CEO of Answers in Genesis writes on his website. Answers in Genesis is the Christian ministry behind the Creation Museum in Petersburg, Kentucky.
Nye's criticism of creationism went viral earlier this week, after being posted last Thursday.
"I say to the grownups, if you want to deny evolution and live in your world, that's completely inconsistent with the world we observe, that's fine. But don't make your kids do it," Nye says in his Big Think video, which has been viewed nearly 3 million times.
Ham writes that Nye is joining in with other evolutionists who say teaching children to deny evolution is a form of "child abuse." That idea comes in part from the atheist scientist Richard Dawkins, who in his book "The God Delusion" argues against exposing children to religion before they are old enough to fully understand it.
"At AiG and the Creation Museum, we teach children and adults the truth concerning who they are in the Creator’s eyes — and where they came from," Ham writes. "We tell people that they do have purpose and meaning in life and that they were created for a purpose. "No, we are not just evolved animals as Nye believes; we are all made in the image of God."
Ham is the public face of a group that academics call Young Earth Creationists, though they prefer to be called Biblical Creationists. They believe in a literal interpretation of the creation account in the book of Genesis found in the Bible.
The Creation Museum also produced its own rebuttal video on YouTube that features two of their staff scientists, both Ph.Ds, David Menton and Georgia Purdom.
"[Nye] might be interested to know I also teach my young daughter about evolution and I know many Christian parents who do the same," Purdom says in the video. "Children should be exposed to both ideas concerning our past."
For the past 30 years, one popular method for Creationists to advance their cause has been to make an equal-time argument, with Creationism taught alongside evolution. In the late 1980s, some state legislatures passed bills that promoted the idea of a balanced treatment of both ideas in the classroom.
In 1987, the issue made it all the way to the Supreme Court, where a Louisiana "equal-time law" was struck down. The court ruled that teaching creationism in public school classrooms was a violation of the Establishment Clause in the Constitution, which is commonly referred to as the separation of church and state.
A key point between most scientists and many creationists is the timing for the origin of the world.
Your Take: 5 reactions to Bill Nye's creationism critique
Nye's argument falls in line with the vast majority of scientists, who date the age of the earth as 4.5 billion years old and the universe as about 13.8 billion years old.
"The idea of deep time of billions of years explains so much of the world around us. If you try to ignore that, your worldview becomes crazy, untenable, itself inconsistent," Nye says in his viral video.
Young Earth Creationists say the weeklong account of God creating the earth and everything in it represents six 24-hour periods (plus one day of rest) and date the age of the earth between 6,000 and 10,000 years.
"Yes we see fossils and distant stars, but the history on how they got there really depends on our worldview," Purdom says in the museum's rebuttal. "Do we start with man's ideas, who wasn't here during man's supposed billions of years of earth history or do we start with the Bible, the written revelation of the eyewitness account of the eternal God who created it all?"
Polling from Gallup has shown for the past 30 years that between 40-46% of the survey respondents believe in Creationism, that God created humans and the world in the past 10,000 years.
The most recent poll showed belief in atheistic evolution was on the rise at 16%, nearly double what it had been in previous years. The poll also found 32% of respondents believe in evolution guided by God.
Eric Marrapodi - CNN Belief Blog Co-Editor
Filed under: Belief • Christianity • Creationism • Science
soundoff (5,973 Responses)
46% of the people in this country believe that the earth is 6,000 years old? And we are wondering why we are in so much trouble? I think we are moving in the wrong direction…
September 1, 2012 at 12:33 pm | Tom, Tom, the Piper's Son
I hope the 46% are very, very old and will soon be gone.
September 1, 2012 at 12:38 pm | Ronald Regonzo
What does the age of the earth have to do with Obamas incompetence? The earth will be 6000 years old when President Romney says it is. Romney/ Ryan 2012
September 1, 2012 at 12:41 pm | Halkes
Wow, check out this guy ^^^
September 1, 2012 at 12:44 pm | Elena
tom tom the full not even know what a particle wave probality is and yet he sends people to read science, Lol
September 1, 2012 at 12:50 pm | Terry
The worst of it is : They have the right to vote... Let me see...could we correlate those 40% + with voting intentions ? I would bet my bottom dollar that we would find a trend towards the GOP... One party does seem to unite all the dimwits together...
Why, Oh why does a Bible litteralist always seem to be a dimwit,once prodded a bit...
September 1, 2012 at 1:12 pm | HumanityHater
LOL it IS child abuse. Teaching mythology as fact screws people up. It destroys critical thinking skills (though religious people want critical thinking destroyed) And there is nothing equal about creationism and evolution. One is based on our observations of the world and evidence. The other is completely made up by medieval fantasy writers. Creationism is religion and nothing more. In the USA you can practice any religion you want with freedom, but it can't be part of government. Public education is government. God and science can actually work together. A god may have caused the big bang that formed the universe, galaxies (not mentioned in the bible at all) and they allowed for human evolution and the evolution of all other animals on the planet.
But science and silly religion are not, and will never be compatible.
September 1, 2012 at 12:31 pm | TheVocalAtheist
So therefore God and science can never be compatible, right?
September 1, 2012 at 12:34 pm | Chip Fields
Name one thing Creationism teaches that is made up and is completely falsifiable.
September 1, 2012 at 12:37 pm | Moby Schtick
@Chip
You don't seem to understand what "falsifiable" means. It's a good thing. If a specific theory has a falsifiable/verifiable hypothesis that we can test for, we can determine if the theory is correct on that specific point or not. It ADDS knowledge. Science, essentially, is about measurement. Creationism doesn't allow for measurement of any specific hypotheses and as such, is merely an a'ssumption so you might as well a'ssume anything you want in place of that original belief.
September 1, 2012 at 12:42 pm | Blue Dog
@Chip I can't believe you asked the question. How do you disprove the fact that age of Earth can be accurately measured to more than 4 billions years. How can you disprove that there are fossils found all over the earth whose age is more than millions years old? How can you disprove that DNA analysis of human and chimps DNA has now proved that evolution is real?
September 1, 2012 at 1:50 pm | Universalist
Ok, who did God talk to today? Fess up.
September 1, 2012 at 12:31 pm | Al
Science guy thinks he is so smart with all his facts. He doesn't realize that anything born in antiquity is just as good as a fact to the majority.
And do you think that is right?
Yes, a book with some parts almost 3,000 years old can't be wrong. The ancient authors were much more intelligent than humans today. Why is this so hard to understand? What is wrong with you people?
September 1, 2012 at 12:37 pm | Will Farouti
Yo, Al. Here's the sarcasm tag you forgot:
"The ancient authors were much more intelligent than humans today."
Please cite your sources for this claim.
September 1, 2012 at 12:43 pm | Dean
After reading remarks in here it is obvious that ancient authors were much more intelligent than people of today.
Al and Dean
It is people that think like you that really put a damper on progress. I bet you both would have loved to live back in the Dark Ages, right?
Vocal Atheist,
Read Will's comment.
September 1, 2012 at 12:50 pm | Reality
Only for the new members of this blog: ( TOPIC INFORMATION THAT EVERYONE SHOULD BE AWARE WITH RESPECT TO THEIR OWN EVOLUTION)
Are you part Neanderthal? Read below. (this is no joke)
Besides the dinosaurs and other fossils in our evolutionary process:
You might be part Neaderthal and for $99 actually find out:
As per National Geographic's Genographic project:
https://www3.nationalgeographic.com/genographic/ " DNA studies suggest that all humans today descend from a group of African ancestors who about 60,000 years ago began a remarkable journey. Follow the journey from them to you as written in your genes”.
"Adam" is the common male ancestor of every living man. He lived in Africa some 60,000 years ago, which means that all humans lived in Africa at least at that time. Unlike his Biblical namesake, this Adam was not the only man alive in his era. Rather, he is unique because his descendents are the only ones to survive. It is important to note that Adam does not literally represent the first human. He is the coalescence point of all the genetic diversity."
For your $99 and a DNA swab:
"Included in the markers we will test for is a subset that scientists have recently determined to be from our hominin cousins, Neanderthals and the newly discovered Denisovans, who split from our lineage around 500,000 years ago. As modern humans were first migrating out of Africa more than 60,000 years ago, Neanderthals and Denisovans were still alive and well in Eurasia. It seems that our ancestors met, leaving a small genetic trace of these ancient relatives in our DNA. With Geno 2.0, you will learn if you have any Neanderthal or Denisovan DNA in your genome."
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Religion teaches us to have blind faith without facts.
Science teaches us to have facts without blind faith.
actually, we all see the same evidence, we just interpret it differently based upon our presuppositions (and, yes, we all have presuppositions)
September 1, 2012 at 12:33 pm | donna
No Chip, we don't all see the same evidence. These parents aren't reading evolutionary research to their kids.
September 1, 2012 at 2:34 pm | Erik
How is saying we were "created in the image of God" any more believable than the idea our "souls" are a result of an intergalactic war between space aliens, as Scientologists believe? Neither view has any evidence to back it up other than the written word of certain individuals.
September 1, 2012 at 12:27 pm | RONALD DIBERTO
in truth our souls have been proven to exist in the past. but have been forced down by the central science academies.
http://en.wikipedia.org/wiki/Duncan_MacDougall_%28doctor%29
these test continue, and have been proven correct. we all lose 21 grams at the point of death. no matter your age, weight or size.
Hey Ronnie!
"His results have never been reproduced, and are generally regarded either as meaningless or considered to have had little if any scientific merit. [1][2] Nonetheless, MacDougall's finding that the human soul weighed 21 grams has become a meme in the public consciousness, mostly due to its claiming the ti*tular thesis in the 2003 film 21 Grams."
September 1, 2012 at 12:41 pm | The rev
I'm a degreed scientist and a degreed theologian. There are two perfectly good ways to explain creation which are both true but seen through two different lenses. One is the truth through scripture, which is theology and metaphor. The other is the truth through science which is measurable and observable fact. They are two completely different experiences of reality and cannot be mixed or jumbled together as the Creationists try to do. Creationists dishonor both God and science and only provide good amunition for athiests who like to poke fun at people with faith.
September 1, 2012 at 12:28 pm | Sad
For some reason we want to know HOW processes work. Science helps us in this endeavor.
For some reason we want to have an individual place in a meaningful narrative. Myth helps us in this endeavor. (I intend the word "myth" in the philosophical and literary sense, not as some derogatory or pejorative term)
tom tom you agree with the poster comment and you not even understood his point? lol how fullish you are
are you physicist? can you explain this for me?
I would like to hear from scientist an explanation as to how is it that light by pure chance acquired the ability to carry information, ans how is it that by pure change the optic nerves develop by pure chance the ability to transform the light it receives in electromagnetic impulses and more incredible maintaining the information carried in that light intact and convey it to the particles of the brain atoms that in turn even more amazingly by pure chance develop the ability to interpret that information and transform it in to images colours and more.
even more astonishing same goes for all other senses where information is carried in waves to the brain and transformed into an experience by what we called mind?
How could that all have been by pure chance be so tuned to give rises to our living experiences?
When you figure out that the word is "foolish", as has been pointed out for you previously, Elena, you can go on and tell me what makes you imagine that I don't understand the point the poster made. I can hardly wait.
Your requests don't even make sense, Elena. Who is using the words "pure chance" to explain how species evolved?
September 1, 2012 at 1:02 pm | agathokles
@Elena: You ask your question about how light came to carry information as though our inability to answer it disproves evolution. It does not. I feel perfectly fine about invoking God as creator of all - billions of years ago - and about invoking evolution to explain how we got to where we are today. The major dispute is not between believers in God and believers in evolution; it's between believers in evolution and those who insist upon a literal interpretation of the Bible.
September 1, 2012 at 1:13 pm | RONALD DIBERTO
This will settle everything. There is a forgotten term that was created by myself many years ago while in the lab.
Creational Evolution. This is the state when God in his divine wisdom created everything and allowed it to grow and mature through evolution. Since there is no other way to explain the Big Bang Theory other than an outside source started the process, let's just say God flicked the switch on our creation. The Christian extremes will look upon this as heresy. But Adam and Eve were not the first people on earth. They were the first Faithful. Evolved from ape like creatures over millions of years. And the earth was created in 6 days. but 6 days according to someone who lives with eternity. So a day to god could be millions or billions of years. So please stop taking the early scriptures in a literal sense. It does not work.
September 1, 2012 at 12:26 pm | HumanityHater
Most loonies refuse such a reasonable explanation. They want it to be literal literal literal. And thus they are lunatics!
September 1, 2012 at 12:43 pm | Atheism is not healthy for children and other living things
Prayer changes things ,
September 1, 2012 at 12:24 pm | jl
Whatever.– one mans opinion does not run the universe. Next! We can think what we want evolution = no accountability therefore" I am a god " just a deception to feed the minds of the lost. Lucifer is trying to get his trophies for eternity as usual. Job 38:4 Where were YOU when I stretched forth the heavens like a curtain...."
September 1, 2012 at 12:22 pm | midwest rail
Your assertion is flawed and untrue.
September 1, 2012 at 12:24 pm | nope
@midwest...
Scintillating reply.... ( eye roll )
September 1, 2012 at 12:27 pm | Universalist
What if God is a carpet. Does that mean if i get rug burns God is trying to talk to me?
September 1, 2012 at 12:41 pm | Imagine No Religion
Bill Nye is the man! We need more scientists to step up and expose these frauds. Creationists need to crawl back under whatever rock they slithered out from under 6000 years ago. Fantasy belongs in comic books, not science books.
Ken Ham is an idiot! Period.
"There ain't no jesus gonna come from the sky.
Now that I found out, I know I can cry." – John Lennon
September 1, 2012 at 12:22 pm | truth be told
Nye isn't a scientist, he is the host of a kids tv show. Do you get all your thinking done for you by pee wee herman?
He's more of a scientist than you'll ever be, TuBeTop.
September 1, 2012 at 12:25 pm | chris hitchens
bill nye is tom toms favoooorite science guy. lil tom tom just sits in front of the tv with its thumb in its mouth in a puddle of pi ss until billy says goodnight
Oooh, guess my comments left a mark on poor widdle chrissy.
September 1, 2012 at 12:30 pm | Jhamilton918
@Truth be told He holds a mechanical engineering degree from Cornell, where he was taught under Carl Sagan and two more Honorary doctorates. He also worked for Boeing as a consultant for many years. He IS a scientist.
TBT and chrissie are too busy kissing each other's butts to read your comments.
September 1, 2012 at 12:33 pm | peakprofit
Bill Nye has a Mechanical Engineering degree from Cornell. I think that qualifies as a scientist enough.
How is the GED working out for you?
September 1, 2012 at 12:41 pm | Mcr
Bill Nye is a mechanical engineer. So yes, he is a scientist, he's just more of a practical scientist. I'm sure he can't compare to your dual chem/bio PhDs though
Apparently the only mark Tom Tom has left is a nasty smelling stain on a rug in front of its tv.
Well, Truth be Twisted, exactly where did I say Bill Nye is a scientist? All I said was more scientists need to stand up against the c rap spewed by Ken Ham and his delusional followers. Seems to me that your reading comprehension skills are on the level of the average Pee Wee Herman viewer.
But since you brought it up, Mr Nye has a Bachelor of Science degree from Cornell University. He is also a reputable and renowned science educator, and the Executive Director of The Planetary Society.. Read (if you can) more at http://en.wikipedia.org/wiki/Bill_Nye and educate yourself.
Mr Nye's science credentials are solid, as opposed to Ken Ham (President – The Snake-Oil Petroleum Company).
Poor TuBeTop, busted again.
Hey, TuBeTop, why don't you tell everyone here what your educational credentials are? That'll be the shortest post here.
@tom tom
nope ,
September 1, 2012 at 12:49 pm | Johnny
If we were created in gods perfect image then his creation is flawed, what god would create a being and have it breathe out of the same orifice that we eat out of, kind of increases your chances of choking and dying quite a bit huh? just saying.
God's spiritual image, not physical image.
Dean, where in your holey book does it say "spiritual image, not physical image."
Oh, you made that up. Thought so.
Mankind is frighteningly stupid, given the tools at our disposal.
There are two of us!
September 1, 2012 at 12:36 pm | Brock
I suppose you folks think that Santa and the Easter bunny is fact?
what makes you think that creationist are so stupid to believe such thing? I am sure that instead of telling your love ones, " Love you with all my heart, you tell them I love you with all my brain? lol
The fact that you can't seem to add an 's' to a word to express a plural is what makes me think you're stupid, Elena.
there are no other stupid than those who use violence and insults as arguments? just because English isn't my first language means i am stupid, your arrogance its the sign of you stupidity. otherwise north koreans and the pakistanies who do not speak English wouldn't build nuclear bombs, and i I feel bad for Aristotle who didn't speak English therefore he was stupid! you the perfect arrogant full who thinks that grammar makes people intelligent!
September 1, 2012 at 12:32 pm | Max
No – most Christians do not believe Santa and the Easter Bunny are real. But, they do believe that Jesus was a real person in history. There is, despite that the Atheists would like to make you believe, some recording in ancient history of a man Jesus. Atheists like to say that Christians added that later – but if you do some real research – you won't agree with that. I would suggest some books for you to read, written by a Scholar and professor at Oregon State University named Marcus Borg. It might give you something from a different aspect to think about. Just google him, and you will find a list of his books.
Atheist that make fun of people who believe in Christianity are a bit immature. At least some of us who call themselves Christian don't make fun of atheists. We debate – but usually not name calling.
Your arguments wouldn't hold water even if you expressed them in perfect English, Elena.
If they weren't fact you would not be referring to them.
If Jesus was not a person how could a fantasy make millions of followers called christians just like 30 years after his dead? Do you forget that real history tells us about the burning of Rome in the year 61 AD and the Neron blamed the Christians. and that Tacitus mentioned Jesus in his Annals book 15 chapter 45 if im not mistaken? did you even know who Tacitus was?
Jesus may very well have existed; that doesn't prove divinity.
September 1, 2012 at 12:39 pm | OTOH
Tacitus talked about Hercules in Germania. Chapter 3. That must mean he was/is real too, eh?
These properties aren't locked to a specific use, but can be exploited whoever we can and choose to use them. Whether a rock is smooth or rough can be used as well tactically.
September 1, 2012 at 12:24 pm | tom-ay
why not? i'd prefer to think aliens did it. does that make me the faithful or the unfaithful? and who cares? we are all on this planet to form a symbiosis with it, we don't do a very good job of that...
what? ???
September 1, 2012 at 12:33 pm | Thomas
A scientist could explain it to you, but you wouldn't understand it.
Why would I not understand it, do you have the explanation? explain it to me? it is not me saying that particles only exist as a wave of possibilities till they are observed?
Elena, what point would there be in explaining it to you in a language in which you aren't fluent? Why don't you pick up a science book written in your first language and read it?
Your a dimwitt of unbelievable proportion.
September 1, 2012 at 12:45 pm | agathokles
What?? Light carried information long before any life was around to receive/interpret it. That's physics, not biology. But when it comes to the development of "vision," well it's not a stretch to imagine an evolutionary path between bacteria or algae that have photoreceptors (to tell them which way to swim towards light) and humans. I think most folks who cannot imagine evolution are folks who know so little about biology that they're essentially lost causes.
agathokles, you are telling facts but I want to know how did light acquired the ability to carry information and how those one celled beings acquired photoreceptor completely tuned to received the information conveyed in light, and further more, if light carried information to give instructions to those beings where do those instruction came from
September 1, 2012 at 1:01 pm | Terry
Pure chance...don't like that, hey ..?..Of course, a little training in probabilities would probably give you the beginning of decent answers, but you would have to study for that, would you not ?..Pure chance has nothing to do with all we see around us... Actually, even probabilities have little to do,except that we know the probability range was tiny indeed...
Hang on to your brain : Things are what they are because we are here. It's a closed loop..If things were not what they are, we would not be here ( as a specie)..If we were not here, things would be different ( but we could not tell, could we..?).
So it might as well be God sacred word that did it all.. Except that the issue is not about "who done it", but Time !.. Is it inconceivable to think that God spoke a word ( or two, or read a book...) and then let things evolve according to his plan.. That would reconcile his existence with free will, at the very least.
Choosing to believe that Time is not a factor ( as in the time necessary for evolution to do its thing) is such a narrow view of the world that one has to wonder at the intellectual capacity of 40% + of the US population: The fact that this 40% + votes is the real fright...
September 1, 2012 at 1:05 pm | donna
Who told you that scientists thought that was all a result of pure chance? You aren't going to get a scientific explanation for that, because the evidence doesn't tell us that it was pure chance.
September 1, 2012 at 2:37 pm | Tom, Tom, the Piper's Son
Elena is stuck on her own beliefs and simply refuses to accept any evidence to the contrary.
September 1, 2012 at 2:40 pm | jimmyleetexas
Creationists have built an alternative reality that ignores science in favor of theology. They would prefer to raise narrow-minded ignorant children who believe in fairy tales than to have them be productive in the real world. Fundamentalists are cut from the same cloth as the Taliban, just different language and clothes.
Too many folks equate belief in evolution with disbelief in God. That's nonsense. I believe in God; I believe God 'created' the universe. I just don't believe in a literal interpretation of the Bible. What is nonsense, is believing - in the face of all the evidence to the contrary - that the earth is 10,000 years old. "Genesis," like other books of the Bible, is not to be taken literally. It's a creation myth. Look, if you're going to be taking the Bible literally, your head will explode. There are inconsistencies in it, all over the place. There are two conflicting creation stories, for example. Different stories of Jesus' birth, as another example. Literalists contort themselves into pretzels, attempting to reconcile irreconcilable passages in the Bible. I follow official Roman Catholic teaching on the Bible: That the Bible is without error only in those elements having to do with our salvation. The Bible is "truth," not "fact." When Jesus tells the story of the workers in the vineyard, you're missing the point if you start wondering where that vineyard was. It's a story illustrating a greater truth about human nature. It does not necessarily have to have actually taken place. When you read the Bible, ask merely, "What inspired truth was the author trying to convey, here?"
Bottom line: One can believe in God without having to reject evolution. Reject literal interpretation of the Bible.
September 1, 2012 at 12:16 pm | Alban Saunders
After 60 years of study, I have never found a single of these "all over the place" inconsistencies". They live only in the minds of casual, lazy scholars who fear that the truths of the Bible will expose their own sin and they will have to face their ultimate eternal destination.
@Alban Saunders: You've go to be kidding! Heck, we can start with Genesis. As Francis Collins says, "Science can't be put together with a literalist interpretation of Genesis," he continues. "For one thing, there are two different versions of the creation story" — in Genesis 1 and 2 — "so right from the start, you're already in trouble." Christians should think of Genesis "not as a book about science but about the nature of God and the nature of humans," Collins believes. "Evolution gives us the 'how,' but we need the Bible to understand the 'why' of our creation."
Read more: http://www.time.com/time/nation/article/0,8599,1895284,00.html#ixzz25EqJhB49
September 1, 2012 at 1:01 pm | budshot
Geez, but this country is dumb, with all those nutjobs believing in myths and fantasy. No wonder we're falling behind.
September 1, 2012 at 12:14 pm | The Son At Dawn
It's more likely we're falling behind because we keep thinking that things are myths and fantasy. The frustration building between G-d and science comes from the fact that there is a Creator and He uses science. He just has a much better grasp on it since it is His creation. But from our perspective there is no G-d, and there is only science, exactly the way it was intended. If I was a creator (little c) I would not want my SimCity citizens to know that I was there, it would ruin the whole thing. But if you try to focus on thinking bigger than just yourself the pieces suddenly come together. If you can do this and act on your changing perception the Creator might just wave at you from behind the curtain and show you His version of 'Hollywood Magic'.
There is a God? How do you know? You don't know anything of the sort. You BELIEVE. There's a difference.
September 1, 2012 at 12:42 pm | John Blackadder
You can fool some of the people all of the time!
September 1, 2012 at 12:13 pm |
by Mike Wheatley | Jan 1, 2013 | 0 comments
As far as internet search goes Google heads into 2013 looking as dominant as it’s ever been, controlling a massive 84.5% share of the global market according to some sources, including a solid 67% market share in the US.
But will we be able to say the same thing in twelve months’ time? According to one observer, Google is going to need to be extra vigilant if it wants to retain its status as the internet’s top dog.
In an interview with Yahoo News last week, the Economist’s chief analyst Daniel Franklin says that when companies reach the top of their game, that’s when they need to start looking round their shoulders.
“That’s when technology companies need to be worried because there’s always the next start-up that comes along and challenges their preeminence,” he said in the interview.
Franklin cites growing concerns over privacy issues that could potentially lead to consumers dropping them as their search engine of choice.
“Google could find that people are worried about privacy issues,” adds Franklin. “They’re worried about the tracking issue.”
Franklin points to the growing awareness of Google’s questionable privacy practices, which have led to several major investigations being launched against the company. Google is currently under investigation by France’s data-protection watchdogs, while in the US it was hit with a $23 million fine by the Federal Trade Commission for illegally tracking users of the Safari browser. Meanwhile, the UK is also reportedly planning to introduce a new communications bill that would limit Google’s ability to perform data mining.
The issues over privacy and also Google’s perceived bias are very real concerns, and some netizens – albeit very few at this time – are beginning to take notice and switch to alternatives. Last November, Google’s market share in the UK dipped below 90% for the first time in five years, while Bing has been steadily increasing its own slice of the action in the US for the past few years.
There are also new rivals on the horizon promising a 'cleaner' search experience than what traditional search algorithms provide. One of these, DuckDuckGo, doesn't use any cookies to track its users, so it will never bombard them with targeted advertising or compromise their privacy in any way. Another alternative, Blekko, promises 'spam-free' search results, only linking to those sites that have been 'verified' by a human moderator. In both cases, these new competitors in the search arena have slowly but surely seen an increase in traffic.
Google faces an interesting year ahead, where it will have to carefully balance its data policies and other practices against an increasing number of investigations into it, while monitoring the competition as it seeks to steal Google’s market share.
“I’m not saying Google will find itself vulnerable,” says Franklin. “But these things can change very fast so Google has to be concerned and they will be watching all these things very closely I imagine.”
The Facebook app for iOS has gotten the ability to record voice and video right in the app, as well as Messenger’s voice messaging support and an improved Nearby tab. Voice messages originally showed up in the iOS and Android versions of Facebook’s Messenger apps, and later VoIP calling was added to those apps in the US and Canada. Now, the voice messaging capabilities have come directly to the main Facebook apps for iOS as well. Sending a message in the app now gives you the ability to record a voice message to send directly to the recipient, and also gives access to the camera to allow for still shots or video clips to be recorded and sent. When you tap on the record button, the recording begins immediately, allowing you to send the message by simply releasing the button. If you swipe off of the button while recording, you can release to cancel it, similar to the way that any other button on iOS cancels an action but with a nice visual cue.
The ability to record and post video right from within the app is also a new feature with the latest version. A new Nearby tab has cleaned up the interface significantly and offers a more Foursquare-like interface for finding interesting things to see and do based on Facebook’s Knowledge Graph. The content of the Nearby tab has yet to show the refinement and usefulness of Foursquare though. There are simply too many random entries that reference someone’s personal checkins in your network. You’re not so much interested in someone’s backyard deck as you might be in a public venue for food or entertainment. In this aspect, at least, Facebook’s local search efforts continue to exist as a side note to its other efforts like photo and video messaging.
➤ Facebook for iOS
More to follow
| 科技 |
2016-40/3983/en_head.json.gz/11755 | July Program Changes Click here to print a schedule U.S. Agencies, Tech Firms Agree To Rules On Surveillance Info By Bill Chappell
Jan 27, 2014 ShareTwitter Facebook Google+ Email Originally published on January 27, 2014 6:53 pm Internet companies that receive U.S. government requests for information about their customers will be able to disclose more details about surveillance than has been allowed, according to a deal announced today by the Justice Department. The shift will allow technology and communications companies "to publish the aggregate data ... relating to any orders issued pursuant to the Foreign Intelligence Surveillance Act (FISA)" — and in more ways than had been previously allowed. While the agreement gives tech companies more options in publishing data about government requests for information, it also includes several limitations. For instance, delays of six months and two years are required for some types of information. The agreement also specifies the "bands" of numbers the companies can use in reporting "national security processes." For instance, if a company decides to report all actions by individual type, such as National Security Letters or FISA orders, it would have to do so in groups of 1,000. But if the company reports those actions as a batch, it can do so "in bands of 250." The shift, which was reflected in filings today with the Foreign Intelligence Surveillance Court, or FISC, is part of President Obama's plan to change how U.S. intelligence agencies gather data, according to a government release announcing the change. Earlier this month, the president discussed several reforms to the U.S. intelligence apparatus, calling on federal agencies to change how they collect, store and use data about American citizens. Update at 6:40 p.m. ET: Reaction To The Change "It is a big deal, but it's only a first step," says staff attorney Nate Cardozo of online privacy advocate the Electronic Frontier Foundation, speaking to our Newscast unit about today's announcement. He adds, "We were really looking for an agreement where they could disclose the exact number of requests that they'd gotten." As for what effect the change might have, Cardozo says he thinks people may soon have a better idea of "the scope of the surveillance state in this country." "All of these requests are made in secret," Cardozo says of U.S. agencies' requests for information, "and none of the targets of the requests will be notified that the government is seeking their data. But we as the American public will start to know more about the number of times that the government comes to the tech companies" looking for citizens' data. Our original post continues: Today's move comes weeks after NSA officials said they would welcome a public advocate at the FISA court, as Mark reported for The Two-Way. Several companies had sought to release more of that information, hoping to reassure their customers they weren't giving U.S. spy agencies broad and unfettered access to their databases and systems — a possibility that arose from the recent revelations about U.S. spy programs included in documents given to the media by former U.S. contractor Edward Snowden. Papers filed today with the FISC seek to answer several tech companies' requests to publish the aggregate information under a First Amendment right. The companies named in the papers are Google, Microsoft, Yahoo, Facebook and LinkedIn. 
Here's the full statement from the Justice Department, released on behalf of Attorney General Eric Holder and Director of National Intelligence James Clapper: "As indicated in the Justice Department's filing with the Foreign Intelligence Surveillance Court, the administration is acting to allow more detailed disclosures about the number of national security orders and requests issued to communications providers, the number of customer accounts targeted under those orders and requests, and the underlying legal authorities. Through these new reporting methods, communications providers will be permitted to disclose more information than ever before to their customers. "This action was directed by the President earlier this month in his speech on intelligence reforms. While this aggregate data was properly classified until today, the office of the Director of National Intelligence, in consultation with other departments and agencies, has determined that the public interest in disclosing this information now outweighs the national security concerns that required its classification. "Permitting disclosure of this aggregate data addresses an important area of concern to communications providers and the public. But more work remains on other issues. In the weeks ahead, additional steps must be taken in order to fully implement the reforms directed by the President. "The declassification reflects the Executive Branch's continuing commitment to making information about the Government's intelligence activities publicly available where appropriate and is consistent with ensuring the protection of the national security of the United States." Copyright 2014 NPR. To see more, visit http://www.npr.org/. © 2016 WQCS | 科技 |
2016-40/3983/en_head.json.gz/11767 | Sizing Down Food Waste: What's The Worst Thing To Toss? By Michaeleen Doucleff
Jul 17, 2014 ShareTwitter Facebook Google+ Email Throwing out a pound of boneless beef effectively wastes 24 times more calories than throwing out a pound of vegetables or grains. Egg and dairy products fall somewhere between the two extremes.
Morgan Walker
Originally published on July 17, 2014 7:11 pm Sometimes I feel like a broken record at home: "Let's eat the leftovers for dinner, so they don't go to waste," But inevitably, Sunday night's pasta and meatballs get tossed out of the refrigerator to make way for Friday night's pizza. Now scientists at the University of Minnesota offer up another reason to put those leftover meatballs in the tummy instead of the garbage: There are hidden calories in the beef that go to waste when you toss it. These invisible calories could help out the 1 in 6 Americans who don't get enough to eat each day, just as easily as the meatballs themselves. And when you add them all up, these hidden calories are enough to help the world feed a rapidly rising population, ecologists report Thursday in the journal Science. About a third of all food grown around the world never gets eaten. Americans alone waste up to about 1,200 calories per person each day. But not all these calories are equal, when you look at how they hurt the global food supply, says ecologist Paul West, who led the study. Discarding a pound of boneless beef effectively wastes 24 times more calories than discarding a pound of wheat, West and his team report. Why? Because the beef also contains all the calories in the corn that fed the cow. "If you throw out some arugula at a fancy restaurant in upstate New York, it doesn't have much impact on the world's food system," West says. "But throwing out a small steak has a huge impact — maybe more than all the arugula in the restaurant put together." Wasting other animal products, such as chicken, eggs and dairy, has less effect on the global food supply than beef, but still more than vegetables and grains, the study found. (Not too mention all the extra water, fertilizer and energy needed to raise animals for food.) West and his colleagues have been searching for new ways to increase the world's food supply. "We have a huge challenge of feeding people now and in the future," he says. "The way we grow and consume our foods is unsustainable." The team quickly zeroed in on curbing food waste as a top strategy. "At least in terms of calories per person, cutting food waste is more of an immediate opportunity to feed more people than increasing crop yields around the world," West says. The U.S., China and India together throw out enough food each year to feed more than 400 million people, the team found. And the biggest contributor to that loss is beef discarded in U.S. Each day the average American throws out 290 effective calories from beef. We also waste about 550 calories from chicken, pork and grains. On the flip side, India wastes the least amount of food and meat of the three countries. Each Indian, on average, effectively tosses out about 44 calories a day, mostly rice and wheat. China fell between the U.S. and India. Each Chinese person wastes about 280 calories of wheat and rice every day. But the Chinese also love pork. And each person effectively tosses 200 calories from pork each day. "The food service industry in China also has really high amounts of waste," West says. "It's a cultural standard, when you're having an event, to honor all the people that come with a seven- to nine-course buffet. All that food doesn't get eaten." Which reminds me: I've got some leftover broccoli with beef in the refrigerator that I am definitely eating tonight.Copyright 2014 NPR. To see more, visit http://www.npr.org/. View the discussion thread. © 2016 WVTF | 科技 |
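As a rough illustration of the "effective calories" idea behind these figures, the sketch below treats discarded meat as carrying the feed calories used to produce it. The 24-to-1 beef-versus-grain factor is the one quoted in the story; the calories-per-pound baseline is an assumed placeholder for illustration, not a figure from the study.

CALORIES_PER_LB_GRAIN = 1500                     # assumed baseline, not a study figure
EFFECTIVE_MULTIPLIER = {"grain": 1, "beef": 24}  # 24x factor quoted above

def effective_calories_wasted(pounds, food):
    return pounds * CALORIES_PER_LB_GRAIN * EFFECTIVE_MULTIPLIER[food]

# A pound of tossed beef "costs" 24 times what a pound of tossed wheat does:
print(effective_calories_wasted(1, "beef") / effective_calories_wasted(1, "grain"))   # 24.0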
2016-40/3983/en_head.json.gz/11775 | Home Research Areas Scattering
A Surprising Path for Proton Transfer Without Hydrogen Bonds
Hydrogen bonds are found everywhere in chemistry and biology and are critical in DNA and RNA. A hydrogen bond results from the attractive dipolar interaction of a chemical group containing a hydrogen atom with a group containing an electronegative atom, such as nitrogen, oxygen, or fluorine, in the same or a different molecule. Conventional wisdom has it that proton transfer from one molecule to another can only happen via hydrogen bonds. Recently, a team of Berkeley Lab and University of Southern California researchers, using the ALS, discovered to their surprise that in some cases, protons can find ways to transfer even when hydrogen bonds are blocked.
Sometimes You Have to Go to Plan B
A proton (the nucleus of a hydrogen atom) is the currency of many of the biochemical reactions that take place in nature, traveling from one molecule to another as the reaction proceeds. Scientists have believed that the usual pathway for proton transfer is via so-called hydrogen bonds, which are the simple result of an electrical attraction between a positively charged proton and a negatively charged atom in another molecule. Hydrogen bonds are everywhere in nature. Two examples: they bind individual water molecules into liquid water and ice, and they hold together the two strands of a DNA molecule. But Golan et al. asked themselves: Does proton transfer really require the assistance of hydrogen bonds, or could the protons find another path?
In an experimental and theoretical study, the researchers demonstrated that protons are actually not obligated to travel along hydrogen bonds. In effect, when there's no straight road between molecules, they can rearrange themselves upon ionization to correct the misalignment. Their finding suggests that without hydrogen bonds, protons may still move efficiently in stacks of molecules, which are common in plants, membranes, DNA, and elsewhere. Armed with this new knowledge, scientists may, for example, be able to better understand chemical reactions involving catalysts, how biomass (plant material) can be used as a renewable fuel source, and how melanin (which causes skin pigmentation) protects our bodies from the sun's rays and damage to DNA.
To understand how bases are bonded in staircase-like molecules such as DNA and RNA, the USC group made computer models of paired, ring-shaped uracil molecules and investigated what might happen to these dimers when they were ionized. Uracil is one of the four nucleobases of RNA. The group modeled the uracil dimer 1,3-dimethyluracil. The purpose was to block hydrogen bonding of the two identical monomers of the dimer by attaching a methyl group to each; methyl groups are poison to hydrogen bonds.
Uracil is one of the four bases of RNA (carbon atoms are brown, nitrogen purple, oxygen red, hydrogen white). Because methyl groups discourage hydrogen bonding, methylated uracil should be incapable of proton transfer. But after ionization of methylated uracil dimers, a proton moves from one monomer to the other by a different route.
The uracils could still bond in the vertical direction by means of π bonds, which are perpendicular to the usual plane of bonding among the flat rings of uracil and other nucleobases. So called "π stacking" is important in the configuration of DNA and RNA, in protein folding, and in other chemical structures as well, and π stacking was what interested the USC researchers. They brought their theoretical calculations to Berkeley Lab for experimental testing at the ALS's Chemical Dynamics Beamline 9.0.2.
To examine how the molecules were bonded, the team first created a gaseous molecular beam of methylated uracil monomers and dimers, then ionized them with vacuum ultraviolet light from the ALS. The resulting species were analyzed in a mass spectrometer. What the collaboration expected to see was that if the monomers were bonded, they would be stacked on top of each other. Instead, they found that when ionized, some uracil dimers had fallen apart into monomers, some of which carried an extra proton, i.e., proton transfer had occurred.
To test the hypothesis that the source of the proton was the methyl group, the researchers invited colleagues from Berkeley Lab's Molecular Foundry to join the collaboration. They created methyl groups in which the hydrogen atoms were replaced by deuterium atoms. The molecular beam experiment was repeated with deuterium-containing uracil, and once again some of the methylated uracil dimers fell apart into monomers upon ionization, but this time they were deuterated. This proved that, indeed, the transferred protons came from the methyl groups.
Just as important, proton transfer was seen to follow a very different route from the usual hydrogen bond pathway, following instead one that involved significant rearrangements of the two uracil dimer fragments to allow protons from hydrogen atoms in the methyl group on one monomer to move closer to an oxygen atom in the other.
Left: Molecular orbitals at the initial (MIN), transition state (TS), and final (PT) structures (with a circle marking the transferred proton) demonstrate the evolution of the wave-function along the proton transfer pathway. Right: A two-dimensional potential-energy-surface scan shows the proton transfer path in the dimer ion involving a concerted change in the distances between the proton and the donating (C–H) and accepting (O–H) atoms.
This result means there could be unsuspected pathways for proton transfer in RNA and DNA and other biological processes upon ionization, especially those that involve π-stacking, as well as in environmental chemistry and in purely chemical processes like catalysis. The next step is a series of experiments to directly map proton transfer rates and gain structural insight into the transfer mechanism, with the goal of visualizing these unexpected new pathways for proton transfer.
Research conducted by: A. Golan and S.R. Leone (Berkeley Lab and University of California, Berkeley); K.B. Bravaya and A.I. Krylov (University of Southern California); and R. Kudirka, O. Kostko, and M. Ahmed (Berkeley Lab).
Funding: U.S. Department of Energy, Office of Basic Energy Sciences (BES); Defense Threat Reduction Agency; and the National Science Foundation. Operation of the ALS is supported by BES.
Publication about this research: A. Golan, K.B. Bravaya, R. Kudirka, O. Kostko, S.R. Leone, A.I. Krylov, and M. Ahmed, "Ionization of dimethyluracil dimers leads to facile proton transfer in the absence of H-bonds," Nature Chem. 4, 323 (2012).
ALS Science Highlight #252 | 科技 |
2016-40/3983/en_head.json.gz/11785 | Should You Buy an E-reader?
8 things to consider before making a purchase
by Steve Morgenstern, June 13, 2012|Comments: 0
E-book reader options have changed radically in the past year — some models are priced as low as $79, and many more now offer opportunities to enjoy music, video, games and all sorts of apps.
See also: Top techno-gadget entertainment trends.
With an e-reader you can carry hundreds of books with you. — Photo by Philip and Karen Smith/Getty Images
How to choose? The most basic distinction between models today is black-and-white or color. With this in mind, here are eight questions to ask when considering a purchase.
1. Are you planning to read outdoors?
The black-and-white screen models are far less susceptible to glare than color e-readers. They look fine under modest lighting (such as the overhead light on a plane) but remain entirely readable while sitting on a beach or a park bench, where the overhead sun will turn a color screen into a solar reflector — that's great for getting a tan, but not much use for reading.
2. Are you planning to read in bed?
Most monochrome e-book readers don't have a built-in light, and while some include accessory lights, a color model that uses a standard color LCD screen could be a better choice if your plan is to do a lot of bedtime reading. There is one intriguing new exception among monochrome readers, though: the new Nook Simple Touch with GlowLight.
3. How often will you recharge the battery?
Battery life is a key advantage for black-and-white e-book readers, which can run a full month or more without recharging. The e-ink display used in these devices uses some power to change pages, but not to keep them visible. Color e-book readers, on the other hand, with their LCD screens, will last just a day or two with heavy use before needing a recharge.
4. Will you be reading magazines or children's books?
Magazines and children's books look drab at best on a black-and-white screen, while the vivid colors of a color LCD capture all the graphic goodness of the original publication. Most magazine pages don't fit comfortably on a small e-book reader screen, though, so expect to do some scrolling around to see everything.
5. Do you want to watch videos or play games on your e-book reader?
The color e-book readers from Amazon and Barnes & Noble double as limited-purpose, seven-inch tablets. They're certainly not as flexible as an iPad or full-on Android tablet — no cameras, no GPS, a comparatively limited selection of apps — but they do a very nice job playing video and MP3 music files, and offer a selection of games and other diversions.
Next: You may already have an e-reader, you just don’t know it. »
6. Do you already own a tablet?
Both Amazon and Barnes & Noble offer free apps that let you read books from their online stores on an existing tablet or smartphone. Reading on a smartphone is fine in a pinch, but it's a chore to constantly turn pages and isn't particularly comfortable. A full-fledged tablet, though, makes a fine reading device, assuming a color screen experience is what you're after. That said, keep in mind that an inexpensive monochrome e-book reader is far more portable than an iPad, the battery lasts much longer, and you can read it in bright sunlight.
7. Do you belong to Amazon Prime?
While it began as a $79 annual service offering unlimited two-day shipping on orders of any size, the Amazon Prime service has branched out in two interesting ways. Members can now download one book a month from a fairly extensive collection for no additional charge (on black-and-white and color Kindles only), and can also stream an expanding variety of movies and TV shows to color Kindles. For Amazon Prime members (or would-be members), there's no reason to stray from the Amazon family, so a Kindle is a solid choice.
8. Do you care about being able to order books from anywhere?
Some e-book readers have built-in 3G cellular connections that let you download books anywhere there's a signal, at no additional charge. 3G e-readers are significantly more expensive, though, and for most, downloading over a Wi-Fi network is convenient enough.
Top Contenders
While there are other companies in the market, there's no good reason to stray from the two biggest players: Amazon with its Kindle series, and Barnes & Noble with the Nook line. Both have online stores with extensive collections of books and periodicals; support from public libraries, which increasingly offer e-book loans; and both black-and-white and color models.
Which e-book readers are most tempting among current offerings? Here are my top picks:
Amazon Kindle ($79). The value leader in the category, with a top-notch screen and easy-to-use software. Note that the $79 price is for the model with "special offers" (you know, ads). But the ads never show up in anything you're reading — they're just on the screensaver page and the menu page, and some of the offers are pretty tempting. No need to spend $30 more for the ad-free version.
Amazon Kindle Touch ($99 with special offers) and Barnes & Noble Nook Simple Touch ($99). Instead of pressing buttons to move from page to page on these similar readers, you swipe across the screen with your finger, mimicking the motion of turning a physical page. There is also a version of the Kindle Touch with 3G cellular service ($149 with special offers).
Barnes & Noble Simple Touch with GlowLight ($139) uses a clever sidelighting scheme to illuminate the page for reading in the dark, while maintaining the ability to read in bright sunshine and most of the considerable battery life advantage of monochrome screens over LCD.
Amazon Kindle Fire ($199) is an impressive multipurpose tablet with a handsome screen, a straightforward interface that's much easier to use than a full-fledged tablet, access to Amazon's extensive app store and, for Amazon Prime members, no-extra-charge book loans and videos.
Barnes & Noble Nook Tablet ($199 with 8 gigabytes of storage, $249 with 16 gigabytes of storage) doesn't offer the same extensive selection of downloadable video available on the Kindle Fire (though both support Netflix), but it has a feature the Kindle lacks: a memory expansion slot, so you can add up to 32 gigabytes of additional storage for music, video, photos and so on.
You may also like: 50 great apps for your smart device. | 科技 |
2016-40/3983/en_head.json.gz/11837 | News, reviews, information and apps for Nokia and Symbian.
Review: Nokia N96
Score:82% Steve Litchfield starts the All About Symbian review of Nokia's new Nseries flagship, the N96. Part 2 will be a walk through its applications and unique selling points and part 3 will look in detail at the N96's multimedia capabilities.
Author: Nokia In case you're in a hurry, I can summarise my entire Nokia N96 review in two one word answers.
Q. Is the N96 as bad as some critics have made it out to be? No.
Q. So is the N96 as good as Nokia's mammoth marketing push would have you believe? No.
The truth lies somewhere in between, of course. Though I have to say that I'm siding with Nokia on this one, rather than the geek critics. To be honest, I thought I'd hate the N96 too, not really being a fan of the 'new' Nseries look, but there are a significant number of detailed improvements here, even over the previous N95 8GB flagship, that help the N96 stand out. And, for UK residents at least, there's the not insignificant boost to functionality given by the inclusion of BBC iPlayer.
Regarding the large number of improvements over the N95 8GB, they're not really compelling enough to make an existing owner upgrade unless it's contract renewal time. But it's worth examining these details, since the N95 marque in general and the 8GB model in particular are both so well known, as benchmarks. So, over and above what's in the N95 8GB, we have:
Hardware decoders for H.264 video and digital audio streams (these are crucial to the N96's raison d'etre, meaning much better power efficiency for media playback) 16GB of flash memory (double that of the N95 8GB and very much needed for the amount of video files you're going to be handling) A microSD card slot (for adding yet more storage, up to another 32GB theoretically, taking the maximum total memory for this device to a whopping 48GB) Full USB 2 transfer speeds (the N95 8GB and most previous Nokia S60 devices were limited to 1MB/s, a paltry amount that prevented any serious filling up with video or music. I've clocked the N96 at 6MB/s for reading from the Mass Storage disk (E) and at about 4MB/s for writing to the Mass Storage disk, at which speed an album of WMA-format music transfers in around 12 seconds and a 200MB full length movie transfers in under a minute) Slimmer, at 17mm, quite noticeable in the hand (versus 20mm for the N95 8GB) Dual LED flash, rather than single (at least twice as bright) Kick-stand on the back of the device props the N96 up at a good angle for video watching while eating breakfast! This sounds like a gimmick, but is surprisingly useful. Once you've gotten used to having the stand you quickly take it for granted and it's a cute step on the N96 becoming a decent video platform. Built-in DVB-H digital TV receiver (though only a handful of countries transmit programmes in this way right now - Italy and Finland most prominently) Better positioned stereo speakers (for use in landscape/TV mode) Top slide keys have their own light-up gaming icons in N-Gage mode (not every game really needs them, they're a bit of an afterthought, if I'm honest) Dedicated music and media control keys on front of the phone (so you can use them without having to open the top slide) Lock/unlock key (on the top of the device - I really like this, it saved having to mess around pressing the fiddly power button or going back to the standby screen or open and close the device in order to lock the keypad) Software improvements include S60 3rd Edition FP2 (see our feature here on this), the BBC iPlayer client (widget), plus better connectivity routing through to RealPlayer (which is what makes BBC iPlayer work seamlessly without having to fiddle with obscure settings), and a whole new version of Video Centre that includes feeds for the likes of the BBC, ITV and Sky ('mobisode'-style content only, though). Bundled with 3 months navigation in Nokia Maps, plus an activation code for Tetris on N-Gage. In theory, the review N96 was also bundled with a version of the movie Transformers, but it was nowhere to be seen. (No great loss, I suspect!)
Hmmm.... that is quite a good list of improvements after all... With no downsides, maybe there is enough there to tempt an N95 8GB owner for an upgrade after all. But... and there is always a 'but'... the N96 has attracted criticism over a number of points: The battery is the same one as in the original N95, at 950mAh, which isn't really enough to power a device with a 2.8" screen, Wi-Fi and GPS for intensive use. The N96's hardware media decoders mean that it's more efficient when playing videos and music, without having to run the processor flat out, but there's still the screen to power and you're going to want that at max brightness for video watching. And for general Web/Talk/Email/Photography use, the N96 struggles to a similar degree to the original N95.After all, we're talking basic Physics here and the energy to run the Wi-Fi/3G/GPS links and to operate the camera (for example) is going to be similar to that needed on the N95 units. Will the battery be a showstopper? Probably not, for many people, this isn't a phone for S60 power users, the kind who bought the N95 8GB, for example. And even new users will have to get used to charging their new N96 every night without fail. The front key cluster is a nice idea in principle - add more functionality to your most used part of the phone. But, as others have rightly pointed out, to have 16 separate function keys in such a small area is, to be honest a bit of a mess. OK, ok, a huge mess. The S60/Menu key is a disappointment in that it's too small and hard to hit - I'm guessing that Nokia would rather have people use the intrusive 'multimedia' key instead, but I still don't like this system. The light-up music/media-control keys are a good idea, but their implementation is immature at the moment - they're often lit up (and distracting) when there's nothing to control - I'd have expected them to only do anything when Music Player or RealPlayer was running. The good news is that this is emminently fixable in software and that Nokia are pretty hot on firmware updates to their Nseries range. The camera shutter key takes some getting used to - the points for focussing and taking a shot are a lot 'lower' than on any other camera-toting phone I've ever used. You almost feel like your finger is dipping inside the phone's casing in order to actually get the shot off. Not a huge problem, but certainly enough to cause most users a 'What the...' moment. 'Slow operation' - I have confess that, in contrast to other reviewers, I haven't found my v10 firmware N96 to be slow in general. One thing I have noticed is that the initial boot-up phase goes on for longer. I don't mean the time taken for the standby screen to first appear (32 seconds), I mean the time taken for all the OS's background processes to start and set themselves up. There's a good 30 seconds after the appearance of the standby screen in which anything you ask the phone to do will be horribly slow because the OS is still, in reality, grinding into action. Once allowed to start up fully though (a minute after pressing the power button), everything happens really fast. At least as fast as on any other S60 3rd Edition FP2 phone and often faster. It's all about understanding what's going on under the hood, as I see it. Certainly, I've been advising users for years not to turn their phone off each night, but instead to simply put it in offline or silent modes. Even more so with the OS-heavy N96. 
You want all those background processes to keep chugging away, doing what they do best rather than having to start from scratch every single morning. Every so often when using the N96, little animations, slowdowns and clues can be noticed, showing that there is indeed more 'clever stuff' going on behind the scenes and undoubtedly sucking up processor power. There's not much anyone can do about this, but hopefully their impact can be reduced as the firmware evolves. The top row of number keys is claimed to be too close to the bottom of the top section, but I had no problems here. Maybe this only affects those with large fingers? 'Buggy firmware' - I've only had a couple of glitches (one each of 'System error' and an OS restart), certainly not enough to warrant issuing a 'stop' order on the sale of N96s. Maybe I've been lucky? I'm sure there are dozens of buggettes still being worked through by Nokia, but I've learned to trust them to keep working and to get it right in time... I was reviewing a v10 unit, by the way, which claimed not to have any 'over the air' or NSU updates available, curious, since v11 has been out for a while. Only 85MB internal flash memory (i.e. the 'C' disk). The lower amount (compared to other recent S60 phones) is a little disappointing, but perhaps understandable given the extra size of the ROM/firmware and all the extra little bits and pieces now built-in. And with 16GB only a single disk away, and with microSD expansion beyond that, I don't think it will cause a problem.
One other factor I've noticed on the performance front is that there's only around 46MB free RAM after booting. Although power users might still hit the limit when attempting to view really heavy web pages, this is still a decent amount for normal operation (c.f. the N95 Classic, even with latest firmware, only has about 30MB free) and I'd expect the figure to rise to well over 50MB free after a firmware update or two. The lower free RAM is again evidence for more of the OS loading 'behind the scenes', ready for faster operation when you need it.
Behind the mammoth marketing campaign by Nokia and its network partners (every TV station, every billboard, every bus stop poster), mainly gunning at the mobile TV aspects of the device (despite the fact that there's no DVB-H coverage in most countries yet) and occasionally at the navigation possibilities, there is of course a 'standard' S60 3rd Edition FP2 smartphone on offer here. The screen's identical to the one in the N95 8GB, the camera's identical to that in the likes of the N85 and N95 (the latter with only single LED flash), the GPS hasn't changed for over a year, ditto Wi-Fi and 3.5G receivers. Most of the interface will be very familiar to all here. Photos and Video Centre officially replace 'Images' and 'Video' in the usual Gallery categories and do a generally comprehensive job. Video Centre in particular is now a lot more mature, has been massively overhauled specifically for the N96 and the various links and directories 'go' somewhere useful. Programmes downloaded from BBC iPlayer appear in Video Centre and can be played in the usual way, with the added bonus that there's a 'Continue watching from where it stopped' feature which is terrifically useful when you're working your way through a downloaded hour long documentary, a bit at a time, during your working day. iPlayer videos are around 100MB per hour, by the way. Which seems a lot (and partly explains why the BBC is pushing so hard to get people to only use iPlayer on a Wi-Fi connection) but with 16GB to spare internally you'll be doing well to have many space problems. iPlayer works best in 'Download' mode, with the actual grabbing of programmes done over Wi-Fi, at home or in the office, when also connected to a mains charger, and then you can watch to your heart's content, with no requirement for continuous 3G or Wi-Fi data coverage and able to run efficiently on battery power alone.
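For what it's worth, here is a back-of-envelope check of the storage arithmetic mentioned above (around 100MB per hour of iPlayer video against 16GB of internal flash); illustrative only, and it ignores space taken up by music, maps and photos.

IPLAYER_MB_PER_HOUR = 100
INTERNAL_STORAGE_GB = 16

hours_of_video = INTERNAL_STORAGE_GB * 1024 / IPLAYER_MB_PER_HOUR
print(round(hours_of_video))   # roughly 160+ hours of downloaded programmes before space becomes an issue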
Plenty of reading for new N96 owners! No less than four books/booklets to pore over. Note the Ovi guide!
It should be evident from all the above that the Nokia N96 can, after all, stand on its own at the top of Nokia's range without feeling too embarrassed. Sure, it's imperfect, but which device isn't? I am a little surprised that Nokia didn't go further though. They could have knocked the ball out of the park by:
Using a VGA screen instead of QVGA - the difference isn't huge for most people's eyesight, but this is supposed to be the flagship and I think it deserved a flagship's screen.
Using Xenon flash rather than dual LED. Yes, the latter is useful for night time video recordings, but these are of quite low quality and generally unwatchable, whereas Xenon makes such an enormous difference for low light or night still photos.
It's important not to underestimate the role firmware updates have in any evaluation of the Nokia N96. The N95 classic had five major updates during its lifetime (spanning a year and a half), taking it from underperforming disappointment to superlative smartphone. The N95 8GB had four (spanning a year), going from glitchy star to genuine flagship. The N96, despite running the newer S60 3rd Edition Feature Pack 2, starts from an arguably more mature codebase and even v10 firmware here feels like v12 or even v20 on the N95 or v20 on the N95 8GB. I'd estimate that the N96 will be fully mature by Feburary/March next year, a much quicker pace of maturity than with its predecessors. And, unlike with the N95 range, owners can, in theory, update their firmware without loss of data or having to jump through backup/reinstallation hoops. The result is a far more optimistic future.
With my unashamedly UK-centric hat on, there are questions I'd love to ask the N96 product management team though. One of these is: would the N96 have been released at all if it hadn't been for BBC iPlayer? Yes, iPlayer, which has only been around for a number of months, can be hacked to work in Wi-Fi streaming mode on the likes of the N95, but it's fiddly. I'm guessing that Nokia's N96 team actively worked with the BBC's boffins to make absolutely sure that N96 owners got access to iPlayer with no hassle, adjusting the way RealPlayer on the N96 acquires its Internet connection and resurrecting the OMA DRM system for this device only, to handle time-locked downloaded programmes.
I referred to the Nokia N95 8GB for ages as 'the most densely packed few cubic centimetres of technology I could think of' - the thing just did so many different things. Well, the N96 goes one better in that it adds another thing. TV. Being able to browse through BBC iPlayer, pick out the programmes that I completely forgot to watch or to record, download them in the background at home over Wi-Fi and then watch them in odd moments while travelling, hanging around at school pick up time, making tea, etc. in a seamless, comfortable experience, is terrific. If iPlayer hadn't existed (and for N96 buyers in many other non-UK and non-DVB-H countries that is, of course, a reality) then the N96 is just an improved version of the N95 8GB but nothing to really get excited about. With TV-on-demand on-board, we've got a whole new ballgame and, in the UK, the N96 is worth me giving it quite a bit more attention.
In part 2 of this review, I'll be walking through some of the applications involved in the unique selling points of the N96....
Steve Litchfield, 2 Nov 2008
Reviewed by Steve Litchfield at 17:57 UTC, November 1st 2008
Platforms: S60 3rd Edition Categories: Hardware
| 科技 |
2016-40/3983/en_head.json.gz/11849 | « Archaeologists and political correctness | A revolution in schools? » Share |
Global Warming: The Courage To Do Nothing (updated)
By Randall Hoven
Is the scientific debate over on global warming? Not according to the American Physical Society* in this year's July's issue of Physics and Society ."With this issue of Physics & Society, we kick off a debate concerning one of the main conclusions of the International Panel on Climate Change (IPCC), the UN body which, together with Al Gore, recently won the Nobel Prize for its work concerning climate change research. There is a considerable presence within the scientific community of people who do not agree with the IPCC conclusion that anthropogenic CO2 emissions are very probably likely to be primarily responsible for the global warming that has occurred since the Industrial Revolution. Since the correctness or fallacy of that conclusion has immense implications for public policy and for the future of the biosphere, we thought it appropriate to present a debate within the pages of P&S concerning that conclusion. This editor invited several people to contribute articles that were either pro or con. Christopher Monckton responded ..." [Emphasis added.]And what did Lord Monckton say?"Some reasons why the IPCC's estimates may be excessive and unsafe are explained. More importantly, the conclusion is that, perhaps, there is no "climate crisis", and that currently-fashionable efforts by governments to reduce anthropogenic CO2 emissions are pointless, may be ill-conceived, and could even be harmful."He examined specific assumptions of the IPCC cited computer models and found that, even using the same models but with more justifiable assumptions, carbon dioxide is not a critical threat to global temperatures."Theoretically, empirically, and in the literature that we have extensively cited, each of the values we have chosen as our central estimate is arguably more justifiable - and is certainly no less justifiable - than the substantially higher value selected by the IPCC. Accordingly, it is very likely that in response to a doubling of pre-industrial carbon dioxide concentration TS will rise not by the 3.26 °K suggested by the IPCC, but by <1 °K."He concluded with "If the concluding equation in this analysis is correct, the IPCC's estimates of climate sensitivity must have been very much exaggerated. There may, therefore, be a good reason why, contrary to the projections of the models on which the IPCC relies, temperatures have not risen for a decade and have been falling since the phase-transition in global temperature trends that occurred in late 2001. Perhaps real-world climate sensitivity is very much below the IPCC's estimates. Perhaps, therefore, there is no "climate crisis" at all. At present, then, in policy terms there is no case for doing anything. The correct policy approach to a non-problem is to have the courage to do nothing." [Emphasis added.]*According to Wikipedia http://en.wikipedia.org/wiki/American_Physical_Society, the American Physical Society was founded in 1899 and is the second largest association of physicists in the world, with over 40,000 members. Update: According to http://www.aps.org/APS Climate Change StatementAPS Position Remains UnchangedThe American Physical Society reaffirms the following position on climate change, adopted by its governing body, the APS Council, on November 18, 2007:"Emissions of greenhouse gases from human activities are changing the atmosphere in ways that affect the Earth's climate."An article at odds with this statement recently appeared in an online newsletter of the APS Forum on Physics and Society, one of 39 units of APS. 
The header of this newsletter carries the statement that "Opinions expressed are those of the authors alone and do not necessarily reflect the views of the APS or of the Forum." This newsletter is not a journal of the APS and it is not peer reviewed. Read: APS Climate Change Statement | 科技 |
2016-40/3983/en_head.json.gz/12026 | World January 11 2013
Larger than life Steve Jobs tops CEO ranking
Steve Jobs has topped a list of the world’s best chief executives - despite his death in 2011.The global list, compiled by French business school Insead for Harvard Business Review, ranked the former Apple head first, followed by Amazon’s Jeff Bezos in second and Samsung’s retired boss Yun Jong-Yong in third.
Steve Jobs is credited with creating long-term value for investors. Photo: Getty Images
Top 10 chief executives Mr Jobs topped the list for this year despite passing away in October 2011 as the analysis looks back at the performances of chief executives of big companies between 1995 and August 2012.It takes in how much total shareholder returns changed during the chief executive’s time in charge, and the increase in the company’s market capitalisation.The survey of 3143 CEOs, which bases its ranking on returns and market value change, credited Mr Jobs with significantly increasing Apple's long-term value, saying his posthumous results were even more impressive than when he topped the list three years ago.
‘‘It comes as no surprise that the best-performing CEO over the past 17 years was Steve Jobs of Apple, who was No.1 on our 2010 list as well,’’ said Morten Hansen, a management professor at the University of California, Berkeley and at Insead.‘‘From 1997 to 2011, Apple’s market value increased by $US359 billion, and its shareholder return experienced average compound annual growth of 35 per cent. That remarkable accomplishment is likely to go unbeaten for a long time.’’The journal said it chose to focus on the chief executives’ ability to created long-term value for their companies, rather than what is usually expected of them - short-term financial results.In an interview with the the journal, Mr Bezos said a long-term approach to management was essential for invention.‘‘I care very much about our shareowners, so I care very much about our long-term share price. I do not follow the stock on a daily basis, because I don’t think there’s any information in it,’’ Mr Bezos said.‘‘The economist Benjamin Graham once said, ‘In the short term, the stock market is a voting machine. In the long term, it’s a weighing machine.’ We try to build a company that wants to be weighed, not voted on.’’The highest-ranked woman was Meg Whitman, who came in at ninth for her time at eBay. She is now in charge of computer giant Hewlett Packard.Only 1.9 per cent of the chief executives that were studied were women, the journal said.The Economist magazine also noted the list appeared to show that ‘‘being a good corporate citizen ... does not make for a successful firm’’.‘‘There seems to be no correlation between whether a boss has a good record on sustainability and the performance of the firm under his tenure,’’ the magazine noted.‘‘Indeed the researchers could point to only a handful of CEOs who performed well on both metrics, including Adidas’s Herbert Hainer and Danone’s Franck Riboud.’’BusinessDay
90 cents in every dollar of executive performance pay is for luck, not good management: study | 科技 |
2016-40/3983/en_head.json.gz/12054 | International Journal of Biology
Vol 4, No 4 (2012) > Sloan
Invariant Feeding Kinematics of Two Trophically Distinct Nonnative Florida Fishes, Belonesox belizanus and Cichlasoma urophthalmus across Environmental Temperature Regimes
Tyler J. Sloan, Ralph G. Turingan
Nonnative fishes have the ability to adapt to environmental conditions in the invaded ecosystem and utilize resources that may have been absent in their native ecosystem. Belonesox belizanus and Cichlasoma urophthalmus are both nonnative fishes in Florida. Ecomorphological studies conclude that C. urophthalmus is a trophic generalist while B. belizanus is a trophic specialist. The current Florida distribution of these species indicates that C. urophthalmus spreads northerly into the colder regions of Florida at a faster rate than B. belizanus. Is it conceivable that this variation in rate of spread is due to differences in temperature response between these ecomorphologically distinct nonnative fishes? This study was designed to test the hypothesis that the prey-capture kinematics and behavior differ between C. urophthalmus and B. belizanus at a given temperature and across temperatures. Two-Way Repeated Measures Multivariate Analysis of Covariance (MANCOVAR) revealed that (1) at a given temperature, excursion and timing variables differed between species and (2) the kinematics of prey-capture did not vary across temperatures in both species. This interspecific comparison suggests that both species have the same temperature tolerance and that any difference in their rate of spread across Florida may be driven by factors other than species-specific physiological tolerance to temperature.
DOI: http://dx.doi.org/10.5539/ijb.v4n4p117 | 科技 |
2016-40/3983/en_head.json.gz/12066 | Images by Date
Get Adobe Reader Panelist Biographies
Alexey Vikhlinin
Alexey Vikhlinin is an astrophysicist at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., and a senior researcher at the High Energy Astrophysics division of Moscow's Space Research Institute. After receiving his Ph.D. in Moscow in 1995, Vikhlinin came to the United States where his main research is on X-ray studies of galaxy clusters and their application for cosmology and the physics of the intergalactic medium. He was recently co-awarded the 2008 Rossi Prize from the American Astronomical Society for his work on cluster cosmology and cold fronts.
William Forman
William Forman is an astrophysicist at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. He was an undergraduate at Haverford College and completed his Ph D at Harvard University using X-ray observations from the UHURU satellite. He has continued his research on galaxies and galaxy clusters primarily using X-ray observations from the Einstein, ROSAT, XMM-Newton, and Chandra Observatories. For the Chandra Observatory, Forman developed and managed the Science Mission Planning operations (1991-2006). He was awarded (with Christine Jones) the first Rossi Prize (1985) for detecting hot gaseous coronae around bright elliptical galaxies.
David Spergel
David Spergel is a theoretical astrophysicist and chair of the department of Astrophysical Sciences at Princeton University. Over the past several years, his main research focus has been the results from NASA's WMAP satellite, which has produced a convincing census of the contents of the Universe and erased lingering doubts about the existence of dark energy. He is also part of the new Princeton Center for Theoretical Physics as well as the Institute for the Physics and Mathematics of the Universe. Spergel belongs to numerous professional societies and serves on many advisory boards and scientific review panels. | 科技 |
2016-40/3983/en_head.json.gz/12146 | Study ties oil, gas production to Midwest quakes
05:51 PM, Friday, April 06 2012 | 465 views | 0 | 4 | | NEW YORK (AP) — Oil and gas production may explain a sharp increase in small earthquakes in the nation’s midsection, a new study from the U.S. Geological Survey suggests.The rate has jumped six-fold from the late 20th century through last year, the team reports, and the changes are “almost certainly man-made.” Outside experts were split in their opinions about the report, which is not yet published but is due to be presented at a meeting later this month. The study said a relatively mild increase starting in 2001 comes from increased quake activity in a methane production area along the state line between Colorado and New Mexico. The increase began about the time that methane production began there, so there’s a “clear possibility” of a link, says lead author William Ellsworth of the USGS.The increase over the nation’s midsection has gotten steeper since 2009, due to more quakes in a variety of oil and gas production areas, including some in Arkansas and Oklahoma, the researchers say.It’s not clear how the earthquake rates might be related to oil and gas production, the study authors said. They note that others have linked earthquakes to injecting huge amounts of leftover wastewater deep into the earth.There has been concern about potential earthquakes from a smaller-scale injection of fluids during a process known as hydraulic fracturing, or fracking, which is used to recover gas. But Ellsworth said Friday he is confident that fracking is not responsible for the earthquake trends his study found, based on prior studies. The study covers a swath of the United States that lies roughly west of Ohio and east of Utah. It counted earthquakes of magnitude 3 and above. Magnitude 3 quakes are mild, and may be felt by only a few people in the upper floors of buildings, or may cause parked cars to rock slightly. The biggest counted in the study was a magnitude-5.6 quake that hit Oklahoma last Nov. 5, damaging dozens of homes. Experts said it was too strong to be linked to oil and gas production. The researchers reported that from 1970 to 2000, the region they studied averaged about 21 quakes a year. That rose to about 29 a year for 2001 through 2008, they wrote, and the three following years produced totals of 50, 87 and 134, respectively.The study results make sense and are likely due to man-made stress in the ground, said Rowena Lohman, a Cornell University geophysicist.“The key thing to remember is magnitude 3s are really small,” Lohman said. “We’ve seen this sort of behavior in the western United States for a long time.” Usually, it’s with geothermal energy, dams or prospecting. With magnitude 4 quakes, a person standing on top of them would at most feel like a sharp jolt, but mostly don’t last long enough to be a problem for buildings, she said.The idea is to understand how the man-made activity triggers quakes, she said. One possibility is that the injected fluids change the friction and stickiness of minerals on fault lines. 
Another concept is that they change the below-surface pressure because the fluid is trapped and builds, and then “sets off something that’s about ready to go anyway,” Lohman said.But another expert was not convinced of a link to oil and gas operations.Austin Holland, the Oklahoma state seismologist, said the new work presents an “interesting hypothesis” but that the increase in earthquake rates could simply be the result of natural processes.Holland said clusters of quakes can occur naturally, and that scientists do not yet fully understand the natural cycles of seismic activity in the central United States. Comprehensive earthquake records for the region go back only a few decades, he said, while natural cycles stretch for tens of thousands of years. So too little is known to rule out natural processes for causing the increase, he said.
U.S. rig count down 2 this week to 506
Bayer AG buying Monsanto
By LINDA A. JOHNSON and DAVID McHUGH
University to give slave descendants help on admissions | 科技 |
2016-40/3983/en_head.json.gz/12161 | Search Carnegie Mellon Researchers Make Breakthrough in Increasing Accuracy of Face Recognition by Machine BY Byron Spice - Wed, 2003-02-12 12:00 PITTSBURGH—Carnegie Mellon researchers have developed a system that increases the accuracy of face recognition by computer. After a slow start in the 1970s, interest and progress in face recognition technology has exploded recently as applications in multi-media began to emerge in the 1990s and exploded as its role in security applications since Sept. 11, 2001. began to attract international attention. The basis of the new technology is Carnegie Mellon's PIE (which stands for Pose, Illumination and expression) Database, developed under the direction of university professor and internationally renowned vision expert Takeo Kanade. Between October and December 2000 we collected a database of 41,368 images of 68 people. By extending the CMU 3D Room wewere able to image each person under 13 different poses, 43 different illumination conditions, and with 4 different expressions. Wecall this database the CMU Pose, Illumination, and Expression (PIE) database. People have the ability to recognize the identity of a human face from pictures taken in various poses, under different lighting conditions, and even when they haven't seen the person for a long time. Computers don't have this expertise. Attempts to give computers the ability to recognize a human face began more than 30 years ago. My Ph.D. thesis detailed one of the earliest computer programs that tried to automate the process of face recognition, including digitizing a face, finding its location, localizing its features, computing various attribute values and recognizing its identity. Automating human face recognition is a very difficult task, especially if one wishes to deal with a variety of poses and different kinds of illumination. In fact, the Face Recognition Vender Test 2000, sponsored by the Department of Defense and the National Institute of Justice, reports that the recognition rate by representative face recognition programs drops by 20 percent under different illumination conditions, and as much as 75 percent for different poses. The first figure below illustrates the face recognition problem. We have to deal with at least three axes of variables: Person, pose and illumination. There are a very large number of possible images (shown in each plane) due to different poses and lighting conditions. The following is a typical face recognition problem. Given a gallery of facial images of many people taken in a particular pose and under varying lighting conditions (that is, one image from the whole set of possible images of each person), tell which plane (i.e. person) a face image at hand, called a probe image, belongs to, despite the fact that the probe image is likely to be very different from the gallery image of the same person and, of course, from that of other people as well. In order to cope with the difficulty, one needs to nullify the effect of illumination and consider how the facial features appear to change due to variations in pose. To study this, we have developed the PIE Image Database. A subject sits in a room with 13 cameras and 17 flashes, each positioned to look at him/her from various angles. Images of all the combinations of poses and illumination angles were collected for 68 people. After three months, another set of images of the same subjects was collected. 
Using the PIE Database, we have been developing an automated face recognition system that can recognize people in different poses and under different types of illumination. The structure of the system is illustrated in the second figure. After finding the location of the face in the image, the first step is to deal with the effect of illumination. In general, the intensity of an image is formed as a product of reflectance and illuminance. The reason that people seem to be able to cope with various lighting conditions in a real physical environment is that they "perceive" reflectance without noting its intensity. Obviously, "computing" reflectance given only intensity is an ill-posed problem; we cannot know the components given only their product. However, it has been shown that it is possible to estimate reflectance from intensity as a solution of a large partial differential equation by imposing anisotropic smoothness, which simulates the function of people's retinal horizontal and amacrine cells. This "normalized" image is the input to the remaining process.

Then facial landmarks, such as eyes and nose, are located, and a set of small areas is defined with respect to those landmark positions. Various attribute values of those areas are computed, such as intensity distribution, edge distribution, edge orientations, etc. Those attribute values are compared with those of gallery images to make a decision as to whom the input probe image is "closest." However, the key technique of our system is that we model and take into account how those attributes change as the pose changes. We have examined, analyzed and modeled such changes beforehand by using the PIE Database, since it consists of images of known pose and illumination conditions. The decision making is done by properly weighting the attributes based on the model. Naturally, the system does not know the pose of the input probe face image, but a hidden-variable technique in probabilistic modeling can still take advantage of the attribute change model. We have shown that the system can handle up to plus/minus 35 degrees of pose and illumination variables without reducing the recognition rate by more than five percent. I will show various examples during my presentation.

For More Information: Byron Spice | 412-268-9068 | [email protected]
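To make the pipeline described above easier to follow, here is a minimal sketch in Python. It is not the CMU system: the anisotropic-smoothness PDE is replaced by a simple Gaussian low-pass estimate of the illuminance, landmark positions are assumed to be supplied by some detector, and the pose-dependent weighting is reduced to a plain weight vector. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_illumination(image, sigma=15.0):
    # Intensity = reflectance * illuminance. In log space the product becomes
    # a sum, so subtracting a slowly varying (blurred) estimate of illuminance
    # leaves an approximate log-reflectance image. A Gaussian blur stands in
    # here for the anisotropic-smoothness PDE described in the article.
    log_i = np.log1p(image.astype(float))
    illuminance = gaussian_filter(log_i, sigma)
    return log_i - illuminance

def patch_features(image, landmarks, size=16):
    # Crop a small window around each landmark (eyes, nose, ...) and
    # concatenate the pixels into one attribute vector.
    half = size // 2
    feats = [image[r - half:r + half, c - half:c + half].ravel()
             for (r, c) in landmarks]
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)

def identify(probe_vec, gallery, weights=None):
    # Nearest-neighbour decision over the gallery; `weights` is a stand-in
    # for the pose-dependent weighting of attributes discussed above.
    w = np.ones_like(probe_vec) if weights is None else weights
    scores = {name: np.linalg.norm(w * (probe_vec - g))
              for name, g in gallery.items()}
    return min(scores, key=scores.get)
```

The sketch captures only the overall flow (normalize illumination, describe small regions around landmarks, pick the closest gallery entry); the attribute set and probabilistic pose model of the actual system are far richer.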
Reddit co-founder dies in NY weeks before trial
Published: Monday, Jan. 14, 2013 5:30 a.m. CDT
(AP Photo/Michael Francis McElroy – The New York Times) Internet activist Aaron Swartz poses for a photo Jan. 30, 2009, in Miami Beach, Fla. Swartz was found dead Friday in his Brooklyn, N.Y., apartment, according to Ellen Borakove, spokeswoman for New York's medical examiner. Swartz, 26, was scheduled to face trial on hacking charges in a few weeks.

By VERENA DOBNIK - The Associated Press

NEW YORK – The family of a Reddit co-founder who committed suicide weeks before he was to go to trial on federal charges that he stole millions of scholarly articles is blaming prosecutors for his death.

Aaron Swartz hanged himself in his Brooklyn apartment Friday, his family and authorities said. The 26-year-old had fought to make online content free to the public and as a teenager helped create RSS, a family of Web feed formats used to gather updates from blogs, news headlines, audio and video for users.

In 2011, he was charged with stealing millions of scientific journal articles from a computer archive at the Massachusetts Institute of Technology in an attempt to make them freely available. He had pleaded not guilty, and his federal trial was to begin next month. If convicted, he faced decades in prison and a fortune in fines.

In a statement released Saturday, Swartz's family in Chicago expressed not only grief over his death but also bitterness toward federal prosecutors pursuing the case against him in Massachusetts.

"Aaron's death is not simply a personal tragedy. It is the product of a criminal justice system rife with intimidation and prosecutorial overreach. Decisions made by officials in the Massachusetts U.S. Attorney's office and at MIT contributed to his death," they said.

Elliot Peters, Swartz's California-based defense attorney and a former federal prosecutor in Manhattan, told The Associated Press on Sunday that the case "was horribly overblown" because Swartz had "the right" to download from JSTOR, a subscription service used by MIT that offers digitized copies of articles from more than 1,000 academic journals.

Peters said even the company took the stand that the computer crimes section of the U.S. Attorney's office in Boston had overreached in seeking prison time for Swartz and insisting that he plead guilty to all 13 felony counts. Peters said JSTOR's attorney, Mary Jo White, had called Stephen Heymann, the lead Boston prosecutor in the case.

"She asked that they not pursue the case," Peters said.

Reached at his home in Winchester, Mass., Heymann referred all questions to a spokeswoman for the U.S. Attorney's office in Boston, Christina DiIorio-Sterling. She did not immediately respond to an email and phone message from the AP seeking comment.

A zealous advocate of public online access, Swartz was extolled Saturday by those who believed as he did. He was "an extraordinary hacker and activist," the Electronic Frontier Foundation, an international nonprofit digital rights group based in California, wrote in a tribute on its home page.

"Playing Mozart's Requiem in honor of a brave and brilliant man," tweeted Carl Malamud, an Internet public domain advocate who believes in free access to legally obtained files.

Swartz co-founded the social news website Reddit, which was later sold to Conde Nast, as well as the political action group Demand Progress, which campaigns against Internet censorship.

He apparently struggled at times with depression, writing in a 2007 blog post: "Surely there have been times when you've been sad. Perhaps a loved one has abandoned you or a plan has gone horribly awry. ... You feel worthless. ... depressed mood is like that, only it doesn't come for any reason and it doesn't go for any either."

Harvard law professor Lawrence Lessig, faculty director of the Safra Center for Ethics where Swartz was once a fellow, wrote: "We need a better sense of justice. ... The question this government needs to answer is why it was so necessary that Aaron Swartz be labeled a 'felon.'"

Before the Massachusetts case, Swartz aided Malamud in his effort to post federal court documents for free online, rather than the few cents per page that the government charges through its electronic archive, PACER. Swartz wrote a program in 2008 to legally download the files using free access via public libraries, according to The New York Times. About 20 percent of all the court papers were made available until the government shut down the library access.

The FBI investigated but didn't charge Swartz, he wrote on his website.

Three years later, Swartz was arrested in Boston. The federal government accused Swartz of using MIT's computer network to steal nearly 5 million academic articles from JSTOR.

Prosecutors said Swartz hacked into MIT's system in November 2010 after breaking into a computer wiring closet on campus. Prosecutors said he intended to distribute the articles on file-sharing websites.

JSTOR didn't press charges once it reclaimed the articles from Swartz, and some legal experts considered the case unfounded, saying that MIT allows guests access to the articles and Swartz, a fellow at Harvard's Safra Center for Ethics, was a guest.

Experts puzzled over the arrest and argued that the result of the actions Swartz was accused of was the same as his PACER program: more information publicly available.

The prosecution "makes no sense," Demand Progress Executive Director David Segal said at the time. "It's like trying to put someone in jail for allegedly checking too many books out of the library."

Swartz faced 13 felony charges, including breaching site terms and intending to share downloaded files through peer-to-peer networks, computer fraud, wire fraud, obtaining information from a protected computer, and criminal forfeiture.

JSTOR announced this week that it would make more than 4.5 million articles publicly available for free.

Swartz's funeral is scheduled for Tuesday in Highland Park, Ill.
SpaceX Successfully Launches Falcon 9 Rocket, Dragon Spacecraft; Heads Towards ISS
Dragon capsule is on its way to the ISS
After scrubbing its planned launch on May 19 due to a faulty check valve, SpaceX proved its critics wrong this morning by successfully launching the Falcon 9 rocket with the Dragon capsule perched atop. The momentous launch took place at 3:44am EST from Cape Canaveral, Florida.
SpaceX CEO Elon Musk -- also known for his ventures with Tesla Motors -- was understandably ecstatic about the success, and expressed his joy on Twitter, stating, “Falcon flew perfectly!! Dragon in orbit, comm locked and solar arrays active!! Feels like a giant weight just came off my back.”
With its solar arrays deployed, the Dragon capsule is on its way to the International Space Station (ISS) and should dock with the station on Friday.
Artist's rendition of the Dragon space capsule in orbit with its solar arrays deployed [Source: SpaceX]
John P. Holdren, Assistant to the President for Science and Technology, issued this statement on behalf of The White House regarding the launch:
Congratulations to the teams at SpaceX and NASA for this morning’s successful launch of the Falcon 9 rocket from Cape Canaveral Air Force Station in Florida. Every launch into space is a thrilling event, but this one is especially exciting because it represents the potential of a new era in American spaceflight. Partnering with U.S. companies such as SpaceX to provide cargo and eventually crew service to the International Space Station is a cornerstone of the President’s plan for maintaining America’s leadership in space. This expanded role for the private sector will free up more of NASA’s resources to do what NASA does best -- tackle the most demanding technological challenges in space, including those of human space flight beyond low Earth orbit. I could not be more proud of our NASA and SpaceX scientists and engineers, and I look forward to following this and many more missions like it.
This marks the first time that a privately funded mission has made its way to the ISS. Upon successful completion of the mission, SpaceX will secure a lucrative $1.6 billion contract with NASA under which it will make 12 deliveries to the ISS.
Sources: SpaceX, WhiteHouse.gov
Battery Recalls to Cause Global Shortages
Tuan Nguyen (Blog) - October 16, 2006 3:56 PM
And prices to rise
Over the last several months, a number of top tier companies have been asking customers to return batteries for replacement due to safety concerns. If you have been following the news on DailyTech, a total of roughly 8 million batteries have been recalled worldwide, all of which are manufactured by Sony.

However, there is good news and there is bad news. The good news is that consumers are being protected from hazards that could cause severe damages or even life threatening situations. Some batteries were found to set laptops on fire. Fortunately, companies were quick to take action and batteries were swapped rather quickly. The bad news is that so many batteries were recalled and not enough were replaced, causing a global shortage of batteries.

Analysts are saying that battery supply is currently at critically low levels on a global scale. Despite being the world's largest lithium cell manufacturer, Sanyo does not have the capacity to supply replacement orders. Samsung SDI Co. also manufactures lithium cells but it too is running low on supply. Nexcell Battery, a Taiwan-based battery manufacturer that produces batteries from cells supplied by Sanyo and Sony, said that cells are now very difficult to come by. Eric Lai, manager at Nexcell Battery, said "if we ask for small amounts, we might be able to get supply, but if we order large amounts of more than 2000 cells then you can forget about it."

Because of the global shortage, battery prices are also on the rise. According to analysts, prices have jumped as much as 15 percent. IBM, Apple, Lenovo, Hitachi, Toshiba, Dell, Fujitsu, Sharp and many other companies worldwide announced recalls over the last several months. In fact, it's reported that Sony is in the process of destroying over 43 million cells as part of the overall recall. This is as much as 10.8 million batteries, said Eric Yu, manager at ETI Pack in Taiwan.

Sony itself has not revealed publicly how many companies in total have recalled batteries that used its cells, and the company is also tight lipped about how much money this recall is costing. So far, however, Sony has budgeted at least $251 million for the recall project. Sony now faces stiffer competition from rivals. LG Chem Ltd., South Korea's largest battery manufacturer, gained several new customers that were previously ordering from Sony. Celxpert Energy Corp., a supplier for Acer and HP, said "we originally bought 30 percent of our battery cells from Sony but have lowered that to almost zero because of quality concerns."
Stoney, (George) Johnstone (1826–1911)

Irish physicist who coined the name "electron" for the fundamental unit of electricity (later given to the negatively-charged elementary particle found inside atoms), did important work on the nature of the Sun, and applied the kinetic theory of gases to the analysis of planetary atmospheres, which had important implications for the habitability of other worlds. Using the kinetic theory, which treats gases as collections of randomly moving particles, Stoney was able to draw conclusions about which gases would tend to escape from the atmosphere of a given planet or satellite. Beginning in 1870, Stoney presented a series of papers to the Royal Dublin Society in which he (1) accounted for the Moon's lack of atmosphere in terms of the low lunar gravity, (2) explained the absence of hydrogen from the Earth's atmosphere in terms of the high average velocity of hydrogen molecules, and (3) concluded that probably no water exists on Mars (see Mars, water). It was not until 1898 that these results were published, together with other conclusions about the remaining objects in the Solar System, including that Jupiter can retain all known gases (see gas giant), whereas most moons and all asteroids are devoid of an atmosphere.
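Stoney's escape argument can be made concrete with a short calculation comparing typical thermal speeds against escape velocities. The sketch below is a modern textbook illustration rather than Stoney's own working; the temperatures and the one-sixth rule of thumb are rough assumptions.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
G   = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
AMU = 1.660539e-27   # atomic mass unit, kg

def v_rms(temp_k, molar_mass_amu):
    # Root-mean-square speed of a gas molecule from kinetic theory.
    return math.sqrt(3 * K_B * temp_k / (molar_mass_amu * AMU))

def v_escape(mass_kg, radius_m):
    return math.sqrt(2 * G * mass_kg / radius_m)

earth_esc = v_escape(5.97e24, 6.371e6)   # ~11,200 m/s
moon_esc  = v_escape(7.35e22, 1.737e6)   # ~2,400 m/s

print(f"H2 at 288 K: {v_rms(288, 2):,.0f} m/s vs Earth escape {earth_esc:,.0f} m/s")
print(f"N2 at 390 K: {v_rms(390, 28):,.0f} m/s vs Moon escape {moon_esc:,.0f} m/s")

# A common rule of thumb: a gas leaks away over geologic time once its typical
# thermal speed exceeds roughly one sixth of the escape velocity. Hydrogen sits
# near that line for Earth, and almost every gas crosses it for the Moon,
# echoing Stoney's conclusions about hydrogen and the lunar atmosphere.
```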
Polymer-based sensors may lead to implantable medical devices
Amy Higgins
A closeup of flexible, biocompatible polymer molded from porous silicon.

Researchers at the University of California, San Diego discovered how to transfer the optical properties of silicon-crystal sensors to plastic. This could lead to flexible, implantable devices able to monitor the delivery of drugs within the body, strains on a weak joint, or the healing of sutures.

"While silicon has many benefits, it has downsides as well," says Michael Sailor, UCSD professor of chemistry. "It's not particularly biocompatible or flexible, and it can corrode. You need something that possesses all three traits for medical applications."

Researchers treat a silicon wafer with an electrochemical etch, producing a porous silicon chip with a precise array of nanometer-sized holes. This gives the chip optical properties similar to photonic crystals -- a crystal with a periodic structure that can control the transmission of light much as a semiconductor controls the transmission of electrons.

Molten or dissolved plastic is cast into the pores of the finished silicon photonic chip. The chip mold dissolves, leaving behind a flexible, biocompatible "replica" of the porous silicon chip. "It's essentially a similar process to the one used in making a plastic toy from a mold," explains Sailor. "But what's left behind in our method is a flexible, biocompatible nanostructure with the properties of a photonic crystal."

The properties of the porous silicon let Sailor's team "tune" their sensors to reflect over a wide range of wavelengths, some of which are not absorbed by human tissue. Scientists can fabricate polymers to respond to specific wavelengths that penetrate deep within the body.

To demonstrate how this process would work in a drug-delivery simulation, researchers created a polylactic-acid sensor impregnated with caffeine. The polymer was then dissolved in a solution that mimicked body fluids. Researchers found that the absorption spectrum of the polymer decayed with the increase of caffeine in the "body fluid" solution. "The artificial color code that's embedded in the material can be read through human tissue and provides a noninvasive means of monitoring the status of the fixture," says Sailor. "Such polymers could be used as drug delivery materials, in which the color provides a surrogate measure of the amount of drug remaining," he adds.
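The "color code" read-out Sailor describes can be illustrated with a toy optics calculation. The sketch below treats the replica as a simple periodic stack obeying the first-order Bragg condition and uses a volume-average refractive index; the layer period and index values are invented for illustration and are not taken from the UCSD work.

```python
def effective_index(porosity, n_matrix=1.49, n_fill=1.33):
    # Crude volume-average mixing rule: pores filled with body fluid
    # (n ~ 1.33) inside a polymer matrix such as polylactic acid (n ~ 1.49).
    return porosity * n_fill + (1 - porosity) * n_matrix

def bragg_peak_nm(period_nm, porosity):
    # First-order Bragg condition at normal incidence: lambda = 2 * n_eff * d.
    return 2 * effective_index(porosity) * period_nm

# As drug leaves the matrix and porosity rises, the reflectance peak drifts,
# which is the kind of spectral change that can be tracked through tissue.
for porosity in (0.10, 0.25, 0.40):
    print(f"porosity {porosity:.2f} -> peak near {bragg_peak_nm(250, porosity):.0f} nm")
```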
Source URL: http://machinedesign.com/news/polymer-based-sensors-may-lead-implantable-medical-devices | 科技 |
8 Amazing Emerging Technologies From 2011
By Pete Pachal | 2011-12-30 16:05:26 UTC
1. Networking via LED
Wi-Fi jammed? It won't be a problem if you're networking through your room lights. You heard right — scientists at the Fraunhofer Institute for Telecommunications in Germany worked out a way to transmit data via normal LED light bulbs. Best of all, you can still use them for lighting, since the lights blink on and off too fast for the naked eye to see.
How fast? Quick enough to spew out 800 megabits per second (Mbps) — an impressive spec by home-networking standards. Only a few components are needed to upgrade a typical LED to become a networking illuminator (a term sure to be trademarked any second), which can reach an area of 10 square meters. One drawback: anyone walking between the light and your device will kill the connection.
In another category breakthrough this year, GiiNii also introduced a light bulb that doubles as a speaker.
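Conceptually, the link works by flickering the light far faster than the eye can follow and treating bright and dim as ones and zeros. The toy on-off-keying sketch below shows the idea only; the actual Fraunhofer system uses much more sophisticated modulation to reach 800 Mbps.

```python
def encode_ook(data: bytes):
    # One bit per light state: 1 = LED driven slightly brighter, 0 = dimmer.
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def decode_ook(samples, threshold=0.5):
    # A photodiode sees brightness levels; threshold them back into bits.
    bits = [1 if s > threshold else 0 for s in samples]
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

light_levels = [0.9 if bit else 0.1 for bit in encode_ook(b"hi")]
assert decode_ook(light_levels) == b"hi"
```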
2. Terminator Contact Lens
What if there were a way to get visual alerts without even having to look at a device? That's the promise of a high-tech contact lens that's also a heads-up display. Researchers at the University of Washington developed a lens that includes all the electronics for displaying visual information to the wearer and still remains completely safe to eye tissue. There's literally just one limitation: it can only display a single pixel. The proof-of-concept could lead to more advanced systems, though, and someday soon you may be able to order your own pair of augmented-reality contacts, ready to display information from Google or Wikipedia on whatever you look at, or immerse you in live-action gameplay.
3. Building Better Batteries
Several groups of researchers were working to help create a future where we don't have to plug our gadgets in every night, and when we do, they won't take long to charge. The Berkeley National Laboratory invented a new polymer so they could create a battery that holds 30% more charge than the lithium-ion batteries of today. Other lab coats at the University of Illinois developed a technique that might allow high-capacity batteries to charge and discharge within seconds. A grad student at Stanford is working on building batteries that could be recharged for decades without losing capacity. Other research at Stanford opens the door to transparent batteries that could be used in see-through gadgets. Put it all together and you have the high-capacity, instant-charging, long-life, transparent superbattery of the future. Maybe.
4. The PlayStation Holodeck
Star Trek's holodeck looks like a complete fantasy, but Sony created a convincing video that would make you think twice. Allegedly using no editing or post-production whatsoever, Sony Europe got a couple of London-based production companies to shoot a series of amazing videos that appear to create a holodeck-like experience. Is this the future of augmented reality?
5. Gadgets You Can Bend
One problem with today's touchscreen-heavy tech is that it's fairly fragile (just ask anyone with a cracked iPhone). That would change if the screen wasn't just a rigid piece of glass, but a bendable display. Nokia and Samsung have hinted at bendable phones, and one inventor in Canada has shown it can be done.

6. Full Duplexing: A Path to 5G
More spectrum, more spectrum, more spectrum — that's the incessant cry of the wireless industry, and its solution to many problems it faces. The plea for more airwaves is valid, though the carriers could do a lot more with what they have now if they can make something of what Rice University researchers have built. The team managed to achieve full duplexing — effectively doubling the amount of data transmitted over a network — and they did it using current equipment. Although new wireless standards need to be developed to use the tech, it could be deployed quickly, with existing cell towers.
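The core idea is that a radio knows exactly what it is transmitting, so it can subtract its own, much louder signal from what it hears and recover the far end's transmission on the same frequency. The numerical sketch below is a heavy simplification and not the Rice design; real systems have to estimate the self-interference channel rather than assume a known scale factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
own_tx    = rng.choice([-1.0, 1.0], n)   # symbols our radio is sending
friend_tx = rng.choice([-1.0, 1.0], n)   # symbols we actually want to hear
noise     = 0.05 * rng.standard_normal(n)

# On the air both signals overlap; our own transmission dominates.
received = 10.0 * own_tx + friend_tx + noise

# Subtract a scaled copy of the known transmit signal to expose the other side.
cancelled = received - 10.0 * own_tx
recovered = np.sign(cancelled)
print("bit errors:", int(np.sum(recovered != friend_tx)))
```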
7. Smudge-Proof Screens
Could touch screens of the future be immune to fingerprints? That's the promise of a new kind of screen coating that repels oil-based substances. The German scientists behind the discovery were trying to make a new kind of eyeglass, but it could also lead to iPads that stay spotless — assuming they can make the antismudge coating scratch-proof, too.
8. Social Cloud Computing
Cloud computing — using several computers over an open network to combine their power to attack difficult computing tasks, like the SETI@Home project — has a lot of potential, but one big problem is if someone on the network is malicious, it can screw up the whole operation. It's difficult to know who's a bad guy, though, so information that's even a little bit sensitive will never work on the model.
Or will it? Social networks like Facebook can provide a large group of like-minded parties, all of whom have a certain level of trust. After all, you're already sharing the intimate moments of your life with these people — why not some processing power, too? That's the essence of social cloud computing, an idea from researchers at the University of Montana. And you thought you were just hanging out.
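As a rough illustration of the concept, and not the Montana researchers' actual system, the sketch below hands work units only to contacts whose trust score clears a threshold. The friend list, trust scores, and the toy work function are all invented for the example; a real deployment would ship work over the network instead of local threads.

```python
import concurrent.futures

friends = {"alice": 0.9, "bob": 0.4, "carol": 0.8, "dave": 0.7}  # name -> trust

def trusted_workers(graph, threshold=0.6):
    # Only sufficiently trusted contacts are allowed to crunch our data.
    return [name for name, trust in graph.items() if trust >= threshold]

def crunch(unit):
    # Stand-in for a real work unit (protein folding, signal search, ...).
    return sum(i * i for i in range(unit))

def run_social_cloud(units, graph):
    workers = trusted_workers(graph)
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(workers)) as pool:
        futures = {pool.submit(crunch, u): u for u in units}
        return {futures[f]: f.result()
                for f in concurrent.futures.as_completed(futures)}

print(run_social_cloud([10_000, 20_000, 30_000], friends))
```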
Big-ticket product launches like the Galaxy Nexus and Kindle Fire got gobs of attention this year. But between the marquee product unveilings there were even better stories — telltale hints of what kind of tech might be in products five, 10 or 15 years out. The field of emerging technology let us sneak a peek at the wonders of the future.
It's just a potential future, of course. One with lots of promise, but a lot of things need to happen for a breakthrough in a lab to become a mainstream product. Quirks need to get ironed out, money needs to be spent, and early adopters need to buy it — among a host of other variables. If even a single one of them doesn't happen, it's kaput for any tech, no matter how good.
Let's hope that doesn't happen to much of the future technology glimpsed in 2011, because it's been a fantastic year for breakthroughs. From ultra-convenient wireless networks to superior batteries to gadgets you can bend, many of the emerging technologies from the past 12 months have the potential to change entire industries, if not the world.
We've culled from the long list of contenders to highlight our favorites from the most promising emerging tech of the past year. Besides sheer impressiveness, a truly bleeding-edge tech has to have at least some modicum of attainability, and one of the best qualities of some of the candidates on our list is that they use existing systems. No tech is so great that it's worth any cost.
Here are Mashable's picks for the top emerging technologies of 2011.
Microwave News: A Report on Non-Ionizing Radiation
EMF Exposures in the Womb Can Lead to Childhood Obesity
Kaiser’s De-Kun Li: Second Prospective Study
De-Kun Li is the last man standing. Not long ago, many of the leading environmental epidemiologists in the U.S. were working on EMFs of one kind or another. They've all moved on —all except De-Kun Li, and he continues to break new ground in one study after another.
Li, a senior researcher at Kaiser Permanente in Oakland, CA, has now shown that EMF exposures in the womb are linked to an increased risk of childhood obesity.
"Maternal exposure to high [magnetic fields] during pregnancy may be a new and previously unknown factor contributing to the world-wide epidemic of childhood obesity/overweight," Li writes in a paper posted today by Scientific Reports, a peer-reviewed, open access journal owned by the group that publishes Nature.
Last year at this time, Li published a paper that pointed to an association between prenatal magnetic field exposure and childhood asthma (see also our posts on August 1 and August 17, 2011). The obesity study, like the one on asthma, has a prospective design —both began during the California EMF program in the 1990s (see MWN, M/J01, p.1 and J/A02, p.1). At the time, Li measured the EMF exposures of the women while they were pregnant for a study of EMFs and miscarriages. He then monitored the weight of their children up through their 13th birthday. These are the only two prospective epidemiological studies ever done on EMFs.
Li has now documented an association between two major public health problems among children: Obesity affects about one-fifth of all American children and asthma is the most prevalent chronic childhood disease. Four years ago, Li found that the long-term decline in the quality of human sperm could also, at least partially, be attributed to magnetic field exposures. A decade ago, Li showed that pregnant women exposed to EMFs above a certain threshold (16 mG) had higher rates of miscarriages (see MWN, M/J01, p.1).
"We should definitely not be ignoring the potentially serious health impacts of exposure to EMFs," Li told Microwave News.
In Some Cases, the Risk of Obesity Is More Than Six Times Higher
In this new study, Li found that children of women who were exposed to magnetic fields of more than 2.5 mG (0.25 µT) for at least 10% of the day (2.4 hours) while pregnant had close to twice the risk of becoming overweight or obese compared to those exposed to 1.5 mG or less. This is a statistically significant finding.
When Li limited the analysis to those children with the most detailed follow-ups (11 years or more), the risk rose to close to three times the expected rate of obesity (also significant). And for those children that were "persistently" obese —that is, children who were overweight most of the times they were checked— the risk was even greater: five times higher for maternal exposures above 1.5 mG and more than six times higher above 2.5 mG, both compared to those women who were exposed to 1.5 mG or less.
For all these associations, Li saw a dose-response relationship. That is, the risk got bigger as EMF exposure increased. Li calls the dose-response for the risk of persistent obesity as being particularly "strong."
Sam Milham, a veteran EMF epidemiologist now officially retired but still very active in the field, is not surprised by Li's new finding. "I predicted this," he said in an interview from his home in Olympia, WA. "Childhood obesity is unheard of among the Amish and I believe that at least part of the reason is that they don’t have electrical service in their homes, they don't drive cars and don’t use cell phones," Milham said. "Amish children also have very little asthma and diabetes," he added.
When asked about a possible link to diabetes, Li replied that, "The number of children [in our study] with diabetes so far is too small to examine, but we intend to follow up on this."
In the paper, Li notes that these findings, taken as a whole, make "biological sense" and point to an "underlying association." No one should be surprised that an environmental exposure during pregnancy could lead to adverse effects on multiple organ systems, he told us: "This applies not only to magnetic fields but to many other agents, for example, some chemical exposures during pregnancy can cause multiple birth defects."
An EMF effect on the developing fetus gained credibility earlier this year when a team at Yale medical school, led by Hugh Taylor, showed that mice exposed in utero to high frequency EMFs —from cell phones— developed neurological and behavioral problems by the time they became adults.
In his paper, also published in Scientific Reports, Taylor wrote:
“During critical windows in neurogenesis the brain is susceptible to numerous environmental insults, common medically relevant exposures include ionizing radiation, alcohol, tobacco, drugs and stress. … Even small exposures during periods of neurogenesis have a more profound effect than exposure as an adult.”
In an interview, Taylor told Microwave News that he did not see obesity in the mice he exposed to RF radiation. "It could be because we used a different frequency and, of course, we used a different species," he said. On the other hand, he added, "It makes a lot of sense theoretically because one of the areas we saw affected in the brain —the hypothalamus— leads to changes in appetite and eating behavior."
Prospective vs. Retrospective Epidemiological Studies
Paradoxically, as Li has attributed a growing number of health conditions to magnetic field exposures, a number of leading epidemiologists who have themselves linked magnetic fields to childhood cancer have put distance between themselves and their own findings.
David Savitz is a case in point. Savitz has spent much of his career working on EMF epidemiology. Twenty-five years ago, he was the first to repeat Nancy Wertheimer and Ed Leeper's landmark study showing that children living near power lines had higher rates of leukemia (see MWN, N/D86, p.1). A decade later, in a major study of utility workers for EPRI, Savitz saw an increased risk of brain tumors (see MWN, J/F95, p.1). But he later repudiated most of those findings. Last year when Li's asthma study was published, Savitz said that he doubted that magnetic fields could cause "any health effects at any reasonable levels."
When we asked Li about this, he replied that the answer probably lies in the differences between prospective and retrospective epidemiological studies. In a prospective study, the population is followed in real time and exposures are measured as they occur. In a retrospective study, epidemiologists attempt to estimate exposures and conditions that occurred in the past, sometimes many years later. The Wertheimer-Leeper and Savitz studies used a retrospective design, as have all the others except Li's on asthma and obesity.
"EMF health effects can probably only be examined effectively with a prospective design," Li told us. "Although Nancy Wertheimer was lucky enough to discover a health effect using crude retrospective measures of EMF levels, luck can't easily be repeated. EMF exposures are very hard to measure retrospectively. Grossly inaccurate measures of exposure tend to mask an underlying association. This is just Epidemiology 101."
"In addition, most studies of EMF health effects have focused on cancer and cancer usually has a long latency period, " Li said. "To retrospectively measure EMF exposure 10-20 years before the diagnosis of cancer is extremely difficult, if not impossible."
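Li's point about exposure misclassification is easy to demonstrate with a small simulation. In the hedged sketch below, synthetic data are built with a true relative risk of 2; once 40 percent of subjects are given the wrong exposure label, the apparent odds ratio collapses toward 1 even though the underlying effect has not changed. The numbers are arbitrary and are not taken from any study discussed here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
exposed   = rng.random(n) < 0.3
p_disease = np.where(exposed, 0.02, 0.01)        # true relative risk = 2
disease   = rng.random(n) < p_disease

def odds_ratio(exp_flag, dis_flag):
    a = np.sum(exp_flag & dis_flag)    # exposed, diseased
    b = np.sum(exp_flag & ~dis_flag)   # exposed, healthy
    c = np.sum(~exp_flag & dis_flag)   # unexposed, diseased
    d = np.sum(~exp_flag & ~dis_flag)  # unexposed, healthy
    return (a * d) / (b * c)

print("odds ratio, exposure measured correctly:", round(odds_ratio(exposed, disease), 2))

# Retrospective designs often have to guess past exposure. Randomly mislabel
# 40% of subjects and the apparent association is largely washed out.
flip = rng.random(n) < 0.4
misclassified = np.where(flip, ~exposed, exposed)
print("odds ratio, exposure guessed poorly    :", round(odds_ratio(misclassified, disease), 2))
```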
Hot, hot, hot
By Jason B. Smith
Points Between
Pity the poor clown.
British performer Barney Baloney has been banned from making balloon animals because some children may be allergic to latex, according to an AFP report earlier this week.
The latest ban is just part of what seems to be an assault on the painted dude's act. He's also been asked to stop making balloon guns (although swords are still apparently OK). He's also had to stop using a bubble machine because a child could slip.
The latest latex ban came from a supermarket, whose spokesman said it was all about the welfare of children.
To save one or two kids from breaking out, we'll spoil the fun for the rest of them. Wouldn't you think the parents of these kids would keep them away from the balloon guy?
Oh wait. That would require a degree of parental responsibility. And we all know how popular a concept that is in today's world.
Meanwhile, there's one set of parents I'm not worried about. Kristopher and Priscilla Wells welcomed Ephraim Alexander into the world last Friday.
For all his nervousness in the months leading up to Ephraim's arrival, Kristopher will make a great dad. He is firmly rooted in his faith, just as his wife, parents, sister and other family members.
As long as he can control the influence of a certain "uncle," Ephraim will be just fine.
Speaking of that "uncle," I'm almost starting to believe Al Gore: Global warming is real. And it's apparently a little angry right now.
With local temperatures crossing the century mark on a daily basis for the last few weeks, I'm becoming more and more of a curmudgeon. I'm cranky. I'm tired all the time. The air conditioner at my house doesn't even start to cool the air during the day. And I'm tired of feeling like I'm going to sweat out a vital organ every time I walk outside.
I know it's been this hot before. Or maybe it hasn't. The Environment Georgia Research & Policy Center claims it's hotter than ever. Other climatology experts say there's no real trend. And NASA recently revamped its climate data to show the hottest year ever was more than 70 years ago. But you know what people say about opinions: Throw enough state and federal funding at a subject, and you can get whatever opinion you want.
Still, I guess it could be worse. Although, quite honestly, I'm still trying to figure that scenario out. But I'm sure it involves an army of depressed clowns battling fanatical latex haters with balloon guns and inflatable swords.
Or maybe my brain has been finally cooked by the heat.
Web posted on Thursday, August 16, 2007
A tribute to MIT's Howard Johnson
David Warsh, Boston Globe Columnist
This story ran on page E1 of the Boston Globe on June 1, 1999.
It is a dangerous thing to describe a single man as having decisively shaped present-day Boston. How can any one individual be said to have had a pervasive impact on so various and complicated a place?
But if there is such a person, his name is Howard Johnson, former president and chairman of the Massachusetts Institute of Technology. And though the results of myriad little decisions as a leader of corporate boards, foundations, museums and government agencies add up to influence of remarkable force -- more of which in a moment -- he is best remembered for One Big Thing.
At a critical time in the late 1960s, Johnson stood up to the forces of campus rebellion at MIT. Many university presidents were destroyed by the troubles. Only Edward Levi, University of Chicago president, had comparable success guiding his institution to a position of greater strength and unity after the turmoil.
Levi went on to serve as US attorney general in the aftermath of Watergate. Johnson exerted his influence in more subtle ways. Both men came to symbolize the power of the center to build consensus: for increased participation by women and minorities, and heightened environmental awareness. And as improbable as it is for a man who moved so effectively behind the scenes for 40 years, now Johnson has written a book, Holding the Center: Memoirs of a Life in Higher Education.
It has a foreword by John Reed, the MIT alum who runs Citicorp. A leitmotif concerns Johnson's extraordinarily successful collaborations with General Motors's Alfred Sloan, Dupont's Crawford Greenewalt, and the Lazarus family of Federated Department Stores.
And though the story of the tumult of the 1960s are the hinge upon which the book turns, its highlights are the parts of the narrative before and after. Its most valuable contents may be a pair of short lists of maxims -- themselves worth the asking price for many managers.
Like many of his generation (he was born to a Scandinavian family on Chicago's South Side in 1922), Johnson was heavily influenced by his experiences during World War II -- in his case, service in the First European Civil Affairs Regiment, an Anglo-American unit sent in to southern France in the summer of 1944 to govern Vichy France.
After the 23-year-old "Maire de Montpellier" mustered out, it was only a few short steps to the University of Chicago Business School; then, in 1954, to the School of Industrial Management at MIT. By 1959 he was dean. When in 1965 he accepted a six-figure offer to become executive vice president of Federated, MIT made him president instead.
During his tenure, Johnson offered no bluster, only reason. He stood up with equal perpendicularity to long-haired rads and the hectoring columnist Joseph Alsop. When presidents elsewhere were being forced into early retirement, he stayed on to pick Jerome Wiesner, Paul Gray and Charles Vest as his inheritors. Even Harvard sought his counsel (though he doesn't mention it); but his friend Robert Solow declined the offer.
All the while, Johnson was exercising power in other venues. It was in 1968, for instance, that, as a director of the Federal Reserve Bank of Boston, he preferred Frank Morris to Paul Volcker as president, and led the fight to circumvent the Board of Fed Governors' wish that a new bank be located on the outskirts of town, preferably on I-495.
Morris served the region for 20 highly successful years. And the presence of a new Fed building across from South Station created a new axis for the city, which promptly began to grow eastward to the water.
Johnson also was the driving force behind MIT's decision to create the Whitehead Institute -- the development which, more than any other in the last 30 years, has assured Boston's place in the front ranks of biotech research.
In the years when he ran MIT, Johnson was frequently asked whether he was related to the real Howard Johnson -- that is, the ice cream magnate, who franchised his orange roofs and "28 flavors" for considerable fame and wealth. A friend introduced the two men, and they began dining out with their wives occasionally at the Ritz.
At last the restaurateur returned from one of his ocean voyages to report disconsolately that he had been asked whether he were related to the real Howard Johnson -- the famous university president.
Reprinted courtesy of the Boston Globe.
Bringing the law to the factory
While factory labor rules are notoriously hard to enforce, a new study shows how some inspectors are able to uphold workplace standards.
Peter Dizikes, MIT News Office
The recent factory collapse in Bangladesh has renewed attention to the global issue of workplace standards. In many countries, similar problems have arisen from a lack of enforcement for existing laws pertaining to safety, wages and overtime, or an absence of labor contracts for workers.
These problems occur for a variety of reasons, including a lack of funding for regulators; difficulties acquiring solid information about potential problems in the first place; and corruption in the enforcement process.
“We have a huge problem with enforcement,” says Matthew Amengual, an assistant professor of management at the MIT Sloan School of Management, who has studied the issue in multiple countries.
Now, after extensive on-the-ground research in Argentina, Amengual has a distinctive new paper on the subject that will be published in Industrial and Labor Relations Review. In the paper, titled “Pathways to Enforcement,” Amengual details his findings, including the observation that effective regulators can overcome the constraints on their work by creating informal fact-finding alliances with advocacy groups and others who may have unique information about dubious labor practices. Or, as Amengual writes in the paper, “Regulators can use these groups to make up for state weakness.” So while some people may view effective regulation as a process that needs to be free of political influence, Amengual has discovered that just isn’t so: For better or worse, underfunded regulators will try to connect with outside groups where it is politically feasible, and those alliances tend to generate the instances where laws are firmly enforced. Reducing political influence on the inspection process, he notes, is not the only way to increase enforcement.
Explaining enforcement
Indeed, follow-up on regulations is so rare, Amengual says, that academic researchers essentially need to examine why it happens at all, and what it consists of when it does take place. Or, as he says, “How do we explain enforcement in areas where we don’t expect it to occur?”
Argentina provides a good window into this question because it has a set of national regulations, but a regional system of enforcement. That provided Amengual with the opportunity to examine the differences in enforcement that occur among and within regions. He spent 16 months conducting research in Argentina in 2008 and 2009, interviewing more than 190 government officials, inspectors, labor leaders and firm managers; conducting an original survey of labor inspectors; studying government data and documents; and even reviewing about 1,400 local newspaper articles about labor regulation. In the paper, Amengual details the enforcement process in two provinces, based around the cities of Córdoba and Buenos Aires, in which enforcement varied from industry to industry. In Córdoba, the Labor Secretariat was charged with enforcing regulations, and conducted a relatively large number of inspections — higher than the number per capita in France, for instance — but those investigations were uneven. In the informal brick-kiln industry, where workers form mud into bricks that are then baked in small ovens, labor problems had been publicized in the local press, but few enforcement actions were taken. In the larger-scale metal industry, workplace problems appear to have been less extreme, but the regulators took far more action. Why? Local unions were stronger in the metal industry, and could provide inspectors with copious amounts of information on labor standards. In Córdoba, as Amengual writes, “there is enforcement, but it is skewed toward industries with large numbers of union demands.” A related story holds in Buenos Aires, with a few twists. There, regulators had fewer ties with the metal industry unions, and did not investigate the industry as fully as their counterparts in Córdoba — but established a strong working relationship with an advocacy group, La Alameda, that supported Bolivian immigrants in the garment industry. The enforcement that followed was heavily oriented toward violations pertaining to the garment-makers.
Such working relationships were necessary because informal sweatshops are not necessarily obvious from the outside. Rather than, say, trying to study electricity-usage records to deduce where off-the-books production was occurring, inspectors could get information from the advocacy groups that had direct ties to workers. “This allowed them to find highly vulnerable workers they never would have been able to locate,” Amengual says. Or, as one senior official in charge of inspectors told Amengual about how he approached his job: “I went looking for allies.”
The pitfalls of outside help
Other scholars in the field are impressed by the study. “The paper is creative and in some ways it’s path-breaking,” says Janice Fine, an associate professor in Rutgers University’s School of Management and Labor Relations. Amengual, she adds, “has really become an important voice in the debate about labor standards enforcement,” by meticulously documenting the on-the-ground strategies inspectors have been using and the generally overlooked tactic of forging alliances with groups in civil society.
“People haven’t really been thinking about it this way,” Fine adds. Amengual emphasizes that he is not offering the tactics of the Argentine regulators as a one-size-fits-all prescription for better factory-law enforcement around the globe. After all, the pattern he discovered — of regulators seeking information from outside sources — could be used to justify keeping enforcement budgets small, and it does represent a politicized process of enforcement. “This shouldn’t be a replacement for strengthening bureaucracies,” says Amengual, who is turning his research into a book about industry and regulation, including workplace and environmental issues.
At the same time, Amengual does not endorse leaving enforcement in the hands of companies themselves, as some would prefer. Amengual is among many MIT-based scholars who have conducted research on the subject with Richard Locke, the former head of MIT’s Department of Political Science and a former associate dean at MIT Sloan, who once advocated for self-enforcement among firms, but has since come to view state action as an essential part of safe factory conditions. “We need the state to do this,” Amengual agrees.
African Clawed Frog Spreads Deadly Amphibian Fungus
Global trade, scientific and medical advancements contributed to the spread of a deadly amphibian disease.
An African clawed frog, Xenopus laevis.
Photograph by Joel Sartore, National Geographic
Apocalyptic, catastrophic, devastating: All words used to describe chytrid fungus infections that are wiping out amphibians around the world, including hundreds of frog and salamander species.
“It did a really huge number on an entire genus of frogs in Central America,” said Marm Kilpatrick, a disease ecologist at the University of California, Santa Cruz (UCSC). The fungus probably caused several species of this harlequin frog (Atelopus) to go extinct, he added. (Related: “Endangered Frogs Get Helping Hand.”)
Chytrid is also largely responsible for endangering California’s mountain yellow-legged frog (Rana muscosa).
"It's the single biggest threat to vertebrate diversity in the world," Kilpatrick said. (Related: "30 Amphibian Species Wiped Out in Panama Forest.")
The fungus, which seems to attack only amphibians, causes a thickening of the infected amphibian’s skin, preventing the animal from breathing properly and interfering with its electrolyte balance. The infection can eventually lead to cardiac arrest, although some frog species are better able to cope with it than others.
A new study delving into how this fungus spreads has now linked chytrid outbreaks in California—one of the more recent areas experiencing huge amphibian die-offs—to the spread of the African clawed frog (Xenopus laevis).
And the study’s implications could extend far beyond California, providing scientists with a potential road map showing how a devastating infection continues to spread around the world.
Until now, direct evidence of the chytrid fungus in African clawed frogs in regions of the world that have seen big amphibian die-offs has been missing, write the authors of the new study.
"I was surprised that nobody did this study before us, actually," said Vance Vredenburg, a conservation biologist at San Francisco State University and lead author of the new study, published May 15 in the journal PLoS ONE.
That could be because labs like his have been in crisis mode, scrambling to find a way to combat the fungus and save as many amphibians as possible, rather than trying to parse chytrid’s origins.
A Questionable Path
Chytrid's origins and how it spread have long been a big unanswered question for researchers, said Kilpatrick, who was not involved in the study.
Researchers in South Africa first proposed in 2004 that the African clawed frog was responsible for the spread of chytrid fungus around the world, said Vredenburg.
That earlier study suggested that the spread of chytrid was aided by the pet trade in African clawed frogs and by the animal’s widespread use as a research animal. Until the 1970s, the frog was also used in many hospitals as an indicator of human pregnancy; injecting the urine of a pregnant woman into the frog caused it to lay eggs.
Individual frogs that escaped or were released into the wild by hospital workers or pet owners may have carried the chytrid fungus, introducing the pathogen to new habitats around the world.
New Techniques
But a new technique developed by Vredenburg in 2011 allowed researchers to quickly evaluate whether amphibians, preserved as museum specimens, had the chytrid fungus or not.
Scientists could quickly swab the skin of an amphibian, analyze any DNA they picked up, and determine whether the chytrid fungus had infected the animal.
In what Vredenburg called the “old-school” way of testing for the fungus, analysis of a single tadpole required students in his lab to examine 200 skin samples under a microscope.
The new technique enabled Vredenburg’s research team to test 201 preserved frogs in the genus that includes the African clawed frog, collected from Africa and California between 1871 and 2010. The specimens, housed at the California Academy of Sciences in San Francisco, were all caught in the wild.
The researchers confirmed that wild specimens of the African clawed frog did indeed carry the chytrid fungus, yielding direct evidence of infection in this species outside of Africa.
That confirmation, combined with correlations between recorded instances of African clawed frogs around California and outbreaks of chytrid fungal infections, brings researchers one step closer to figuring out how this deadly infection became a global scourge.
But the blame for chyrid’s spread might not fall squarely on the African clawed frog, Kilpatrick cautioned.
The American bullfrog (Rana catesbeiana) is a popular source of frog legs served in restaurants around the world. And their movements can also be correlated to the spread of the chytrid fungus.
"The trade or movement of those two species has been responsible for the spread of [chytrid]," Kilpatrick said.
This pathogen could also have been present around the world in a nonlethal form, said Anna Savage, an evolutionary geneticist with the Smithsonian Conservation Biology Institute, who was not involved in the study.
Perhaps something in the environment changed, so that amphibians were no longer able to withstand the chytrid fungus, explained Savage, a former National Geographic grantee.
We’re still trying to compile basic information on this pathogen and how hosts, such as frogs, respond to it, she said. “So just knowing where it came from and how it spread, that’s really important information in terms of management strategies in dealing with this.”
Vredenburg is currently working on building a living library of beneficial bacteria that could help amphibians around the world combat chytrid infections. (Related: "Amphibian Bacteria Fights Off Deadly Fungus, Study Says.")
His team is working to isolate bacteria native to amphibian populations that help some species resist chytrid. They're hoping to culture those beneficial bacteria and dose any infected populations, giving their systems a boost to help fight off the fungus.
"I'm desperate," Vredenburg said. "I don't want to see any more massive die-offs of frogs. I'm done with that."
Expanding Options for Ebooks
By Paula J. Hane | Posted On April 2, 2012
The market for ebooks continues to evolve so quickly that it’s a challenge just keeping up with announcements—new book-related startups, ebookstores, ebook production technology, subscription platforms, and etextbook platforms. Publishers, distributors, and readers all have their share of challenges in dealing with the new technologies, tools, and formats. Here are some of the recently launched services that are aiming to redefine the book experience.

Purchasing/Renting Ebooks With Bilbary

Tim Coates, a former managing director of bookseller Waterstones, launched Bilbary—an online paid-for library that enables readers to log in and download books onto their computers, phones, or tablets. Offering about 350,000 titles, its first titles are academic books and journals aimed at students, with a broader range of fiction and nonfiction expected to follow. It launched first in the U.S. in March 2012; it has plans to launch later in the U.K.

Bilbary plans to serve as a place for book lovers, booksellers, publishers, and librarians to discuss favorites, recommend new titles, and purchase and eventually rent books. While the rental model has been viewed as controversial and a challenge to struggling libraries, Bilbary says it is dedicated to the “preservation of literature and libraries, and has already commenced work with libraries and publishers to develop agreements that can benefit all parties in the long run.”

The site is in beta, and the user discovery experience is kludgy at this point. PaidContent reports that “the user browsing experience is going to have to become infinitely better for consumers to be able to use this site in any real way. If that part can be handled, however, the site has high ambitions and is out the door before the oh-so-delayed publisher-backed Bookish has even launched.”

Bookish, an ambitious joint venture from Hachette Book Group, Penguin Group (USA), and Simon & Schuster, has been in the works for some time, but it has still not launched—lots of buzz but no delivery. We covered the announcement of the planned site in May 2011 in a NewsBreak. It plans to offer a recommendation and discovery engine plus a bookstore.

Etextbooks

In January 2012, Apple unveiled a suite of tools for the education market that it said would “reinvent textbooks.” While the tools are free and have generated a great deal of interest for the area of custom publishing, they continue Apple’s well-known tradition of a closed, proprietary architecture. And, a recent article in the Chronicle of Higher Education reports that Apple’s software, called iBooks Author, lacks easy tools for multiple authors to collaborate on a joint textbook project. Since most books aren’t written in isolation, the article reported on two new publishing platforms that seek to make that group collaboration easier—Booktype and Inkling Habitat.

Booktype is free and open source and allows teams of authors to work together in their browsers to write sections of books and chat with each other in real time about revisions. Entire chapters can be imported and moved around by dragging and dropping. The finished product can be published in minutes on e-readers and tablets or exported for on-demand printing. Booktype also comes with community features that let authors create profiles, join groups, and track books through editing.

Since Booktype is installed and hosted on a server, all content belongs to the author(s). According to the website, users are free to “give any license to any book. No platform or data lock-in, no hidden fees, no restrictive service agreements.”

Inkling Habitat creates a cloud-based platform for the professional market. The company says it is “the world’s first scalable publishing environment for interactive content.” One of its most important features is real-time, cross-platform content updating. Now, publishers and developers using Inkling Habitat can change content and instantly push just those changes to everyone who owns the impacted title. These quick and simple updates make it easy to keep Inkling content current, free of charge. Inkling books can be read on any device with a browser thanks to a new HTML5 format. Mashable reports that “Inkling was never much interested in making books. What they’ve actually been focused on is a book maker that publishers can use to create interactive books by themselves—a tool that could turn interactive digital book publishing on its head.”

The article also points out that “Inkling competitors such as CourseSmart, Kno, and Chegg offer libraries that number in the tens of thousands, but their books usually resemble those on the Kindle: digital versions of their paper counterparts, give or take a highlighting or search feature. While this format is ideal for novels or text-exclusive material, it often falls short for content such as textbooks or instructional material that benefit from interactive diagrams, photos, and videos.”

Inkling, which is based in San Francisco, was founded in late 2009 and is backed by Sequoia Capital with significant minority investments from McGraw-Hill and Pearson.

Vook Platform

After nearly 4 months in private beta testing, Vook launched on March 26 this year a version of its cloud-based ebook production and distribution software. It allows users to embed images, videos, and other multimedia into ebooks to create enhanced ebooks. Vook lets you quickly create, edit, style, and publish an ebook without any special software required. When you distribute through Vook, you keep 100% of your royalties.

The Vook platform is meant to be easy enough so that aspiring self-publishers can use it but robust enough for enterprise use. With a Vook premium account, you can distribute your titles through Vook to Amazon, Apple, and Barnes & Noble in one click or download distribution-ready files that you can distribute yourself. The Vook store offers a catalog of mostly consumer titles in a variety of formats with click-through to several purchasing options.

Unlike Apple’s iBooks Author, the tool isn’t free to use. For single users, Vook offers pricing that ranges from $79 a month for limited access to basic services, including ebook production, storage, and distribution, to $299 a month for greater access to services. For companies that want to make Vook their ebook publishing platform, enterprise pricing is available.

A Boon for Book Lovers?

Bookboon.com, which originates from Denmark out of Ventus Publishing, was established in 1988. Ever since its founding, the company has focused on publishing education-related books for business professionals and students. In 2005, the company made a strategic leap and became the “first book publishing company in the world to focus 100% on free ebooks.” Bookboon.com offers a range of more than 1,500 free ebooks for university students, business professionals, and globe trotters—admittedly a fairly limited offering at this point. Books can be downloaded directly in PDF and are currently available in seven languages.

Bookboon.com reports it has grown globally by more than 500% when comparing January 2011 to January 2012, and the company expects more than 50 million downloaded ebooks in 2012. Bookboon.com is currently headquartered in London with offices in New York, San Francisco, Shanghai, Paris, Munich, Amsterdam, Copenhagen, and Stockholm.

Paula J. Hane is a freelance writer and editor covering the library and information industries. She was formerly Information Today, Inc.’s news bureau chief and editor of NewsBreaks.
T-Mobile Taking Steps to Embrace iPhone Community?
The nature of the relationship between T-Mobile and the iPhone is definitely an unusual one. As soon as the first iPhones on AT&T’s network got carrier unlocked, T-Mobile became the go-to home in the States for AT&T expats. In the years that have followed, we’ve seen the iPhone come to Verizon and Sprint, but T-Mobile is still the odd man out. Throughout this time, the company’s attitude towards Apple and the iPhone has been all over the place. It vacillates between praising the iPhone and denigrating it, peppered with the occasional action that seems to actually embrace subscribers who want to bring their iPhones over to the network. It’s the latter we’re talking about today, thanks to the publication of an internal T-Mobile doc that shows the carrier planning to make a new-found effort to support its community of iPhone users.
Starting at the end of the month, T-Mobile will put procedures in place to offer additional support to its iPhone users, a group it claims numbers around one million. There’s not much T-Mobile’s going to be able to do to help those users out with being stuck on 2G wireless data speeds, but it still wants to make things as comfortable as possible for subscribers who have chosen to bring their iPhones to its network. That means giving its employees more education on setting up an iPhone with T-Mobile, letting users know what they can expect from the service, and creating official community support pages for the iPhone on its website. Only the iPhone will be getting this special treatment; if you’re interested in bringing another non-T-Mobile unlocked GSM phone to the carrier, you’re on your own.
Source: TmoNews
This post has been tagged with: Apple iOS iPhone 3GS iPhone 4 iPhone 4S News Rumors T-Mobile
How Psychedelics Enhance Cognition, Boost Intelligence, and Expand Cognitive Studies
Thomas B. Roberts PhD
The following is excerpted from The Psychedelic Future of the Mind, published by Inner Traditions, Bear & Company.
Current research offers some tantalizing support for claims that psychedelics can be used to enhance cognition, improve intelligence, and strengthen cognitive studies.

Cognitive Enhancement
Experimental evidence of psychedelic cognitive enhancement comes from studies of practical problem solving, abstract concepts, and psychotherapy.
The Sleeping Giant of Psychedelics' Future — Innovative Problem Solving

A significant instance of problem solving resulted in a Nobel Prize for Kary Mullis. Until the invention of the polymerase chain reaction (PCR), a common problem in biology was that biological samples were often too small to analyze, but Mullis solved that and won a Nobel Prize. He described how LSD aided him in doing so. "PCR's another place where I was down there with the molecules when I discovered it and I wasn't stoned on LSD, but my mind by then had learned how to get down there. I could sit on a DNA molecule and watch the [indistinct] go by. . . . I've learned that partially I would think, and this is again my opinion, through psychedelic drugs . . . if I had not taken LSD ever would I have still been in PCR? I don't know, I doubt it, I seriously doubt it." (Mullis 1998; "Horizon: Psychedelic Science" 1997)

From the point of view of psychedelic cognitive studies, Mullis's example is noteworthy because he did not have his insight while taking psychedelics but instead used psychedelics to increase his ability to visualize, then transferred that cognitive skill back to his ordinary mindbody state. This confirms the idea that some skills learned in one state can be transferred to another. Transference and nontransference between mindbody states is itself a cognitive process that deserves study — learning to remember dreams, for example. Learning to increase this flow, if it is possible, would increase access to stores of information and possibly to new cognitive skills.
Unlike Mullis's experience of transferring a skill back to his ordinary state, most instances of psychedelic problem solving occur while the person's cognitive processes are psychedelically augmented. This is most clearly illustrated by "Psychedelic Agents in Creative Problem Solving: A Pilot Study," by Willis Harman, a professor of engineering economic systems, and a team of researchers at Stanford Research Institute. Working with twenty-seven men who were "engaged in various professional occupations, i.e., engineers, physicists, mathematicians, architects, a furniture designer, and a commercial artist and had a total of 44 professional problems they wanted to work on," the Stanford Research Institute team divided them into groups of three or four and gave them 200 milligrams of mescaline, followed by a quiet period of listening to music. Then they had snacks and discussed their problems with their group. Following this they spent three or four hours working alone on their problems. As a result of psychedelic enhancement, the practical results were impressive.
"Pragmatic Utility of Solutions. The practical value of obtained solutions is a check against subjective reports of accomplishment which might be attributable to temporary euphoria. The nature of these solutions was varied; they included: (1) a new approach to the design of a vibratory microtome, (2) a commercial building design accepted by client, (3) space probe experiments devised to measure solar properties, (4) design of a linear electron accelerator beam-steering device, (5) engineering improvement to magnetic tape recorder, (6) a chair design modeled and accepted by manufacturer, (7) a letterhead design approved by customer, (8) a mathematical theorem regarding NOR-gate circuits, (9) completion of a furniture line design, (10) a new conceptual model of a photon which was found useful, and (11) design of a private dwelling approved by the client." (Fadiman 2011, 132)
James Fadiman, one of the coauthors of this study, describes it and other psychedelic approaches to problem solving in his 2011 book The Psychedelic Explorer's Guide. His valuable descriptions of their process as seen by an investigator — insider and quotations from the problem solvers themselves draw attention to this sleeping giant of psychedelics' future practical problem solving. It is time for researchers to awaken this giant and for federal agencies and local institutional review boards to move forward and encourage creative invention.
It is a widely known "inside secret" that psychedelics also contributed to the rapid innovation and growth of the personal computer industry (Markoff 2006), and probably the greatest monetary payoff from using psychedelics occurred when the problem of a little start-up software company vying with other start-ups for the eyes of potential customers was solved. "The big quandary for software companies was getting into the market place, finding shelf space. But there was a new way of doing that I thought of called 'shareware,' and I think the concept was very unusual, and I think the concept came to some extent from my psychedelic experience. . . . So that worked. It worked pretty well." (Wallace 1997)
Bob Wallace's idea was to give away programs and ask people to pay whatever they could and wanted to. Because they were free, thousands of people started using them, and this helped his little, unknown start-up company grab market share so that eventually it could charge for its products and begin to turn a profit: micrograms for Microsoft.

Experimental Studies of Abstract Concepts
Much research in the cognitive sciences has to do with memorizing things not worth memorizing, solving silly puzzles, and other unrealistic tasks that lend themselves to clean laboratory research designs but have little relevance in life. This barrier was broken and cognitive studies advanced to higher level thinking thanks to psilocybin. In 2006 and 2008, experiments showed that psychedelics can extend cognitive studies to topics that are important in people's lives but were previously beyond experimentation — meaningfulness and significance among others (Griffiths et al. 2006, 2008, 2011).
In previous chapters we've looked at the implications of these experiments for values and religion; here our concern is their implications for higher level cognitive psychology. They found, "at 2 months, the volunteers rated the psilocybin experiences as having substantial personal meaning and spiritual significance and attributed to the experience sustained positive changes in attitudes and behavior consistent with changes rated by community observers" (Griffiths et al. 2006, 268). To account for the possibility that their volunteers might overrate their own behavior, the Hopkins team interviewed friends and close family members to see if they noticed any changes, which they confirmed. This experiment illustrates how psychedelics can advance experimental studies far beyond trivial attention span and boring digit-memory tasks to the high level abstractions that give meaning to people's lives.
Three written comments express the essence of the participants' experiences. The understanding that in the eyes of God — all people . . . were all equally important and equally loved by God. I have had other transcendent experiences, however, this one was important because it reminded and comforted me that God is truly and unconditionally loving and present.
Freedom from every conceivable thing including time, space, relationships, self, etc. . . . It was as if the embodied "me" experienced ultimate transcendence — even of myself.
A non-self self held/suspended in an almost tactile field of light. (629)
These three samples of enhanced spiritual cognition demonstrate that psychedelics provide a breakthrough for the cognitive sciences: instead of being limited to surveys, random self-reports, and lightly grounded speculation about higher level cognitive processes such as meaningfulness, sacredness, and significance, psychedelics enhance cognitive sciences with an experimental method of investigating these and similar high-level, abstract conceptualizations.
Experimental Religious Studies
With its heavy reliance on words, beliefs, and text, current religion, of course, is heavily cognitive, so it provides another avenue for advancing cognitive studies. "Experimental religious studies" sounds impossible, but thanks to psychedelics it isn't. The findings of Griffiths's group and other reports illustrate one way to use psychedelics to study higher-level abstractions, in this case religious ones.
As mentioned earlier, the best example of the long-term influence of psychedelics on thinking is Rick Doblin's 2001 study "Pahnke's Good Friday Experiment: A Long-term Follow-up and Methodological Critique." Doblin is the founding executive director of the Multidisciplinary Association for Psychedelic Studies. Its website is one of the richest of the psychedelic Internet domains. MAPS is primarily interested in psychotherapy, but reading its publications and website from a cognitive perspective is like stumbling into a great hidden treasure. Doblin's follow-up study documenting the effects of psilocybin given to seminarians a quarter of a century earlier speaks to the power of psychedelics as experimental treatments and to mystical experiences as experimental variables. I look forward to reading a "Journal of Experiential Religion."
Cognitive Aspects of Psychedelic Psychotherapy
Psychedelic psychotherapy is more than a treatment. It has implications beyond health; it provides clues to how our minds work. How does thinking change during successful psychotherapy, such as when psilocybin is used to reframe death anxiety in the work of Charles Grob and his coresearcher Alicia Danforth, or MDMA-assisted psychotherapy is used to reduce post-traumatic stress disorder in patients who have been intractable to other treatments, as in the work of South Carolina psychiatrist Michael Mithoefer?* Other clinical leads suggest treating cluster headaches, obsessive-compulsive disorder, neuroses and psychoses, depression, alcoholism, and addiction. Except for cluster headaches, these cures are usually correlated with mystical experiences. Cognitively, what phenomenological shifts occur during mystical experiences, with the power to reframe thoughts, emotions, and identity so much that they apparently often cure death anxiety, post-traumatic stress disorder, and addictions and alcoholism? Hood's mysticism scale and similar measures of mystical experience may provide clues. In his 1996 "The Facilitation of Religious Experience," Hood summarizes the evidence that psychedelics often produce mystical experiences, and in his 2006 "The Common Core Thesis in the Study of Mysticism," he compares phenomenologically derived and empirically derived models of mystical experience. For cognitive scientists who want to study higher order processes experimentally, the items in Hood's scale may be clues to how to study this type of cognitive reframing.

Improving Intelligence

Howard Gardner, best known for his theory of multiple intelligences, defines intelligence as "the ability to solve problems or produce goods of value to society" (1983). The instances cited above meet his standard for intelligence. Unfortunately, Gardner, like other scholars, defines and describes intelligence as it exists only in our ordinary, default mindbody state. A full view of intelligence would include the skillful use of all states. Recognizing that varieties of intelligence exist in states other than our usual awake state raises the question of whether other cognitive processes have their analogs in other mindbody states too, suggesting a future for multistate cognitive science — researching the question, "How does cognition vary from mindbody state to mindbody state?"
In The Triarchic Mind, Robert Sternberg suggests another criterion for intelligence, defining it as "mental self-management" (1988). By that standard, someone who can access a large collection of information-processing programs and their resident abilities is more intelligent than someone with a smaller repertoire. Kary Mullis's learning to strengthen his visualization capacity and transfer it to his usual state is an example. What about someone who is highly skilled at selecting mindbody states, achieving them, and using their resident abilities? Because selecting mindbody states is an executive function prior to the use of specific states, the word metaintelligence may be useful when discussing this kind of intelligence.
Enriching Cognitive Studies

Not only can cognitive science investigate cognitive enhancement, but by surpassing its current boundaries it can also accelerate the pace of its own scientific progress. Identifying and characterizing cognitive processes (and other processes) that exist in all mindbody states will demand new talents for skilled psychologists, phenomenologists, and neuropsychologists. In order to develop this agenda, a new generation of researchers needs to become comfortable studying these states both objectively and subjectively.
Cognitively, psychotechnologies are ways of installing information-processing programs in our minds. Among the many possibilities, psychedelics illustrate a vast multistate frontier for the future of cognitive studies, one in which mindbody states are sometimes "independent variables" — the things that experimenters change — and sometimes "dependent variables" — the things that change as a result. To put it another way: independent variables are the inputs and dependent variables are the outputs. For example, in the Johns Hopkins studies, psilocybin was the independent variable, and people's experiences were the dependent ones.
Examples include the clinical laboratory experiments we have already looked at in earlier chapters such as those by Griffiths, Grob, Mithoefer, and Grof. Hood's mysticism scale and Lerner and Lyvers's study of values and beliefs of psychedelic users illustrate survey methods. Walsh and Grob's 2005 Higher Wisdom and Badiner's 2008 Zig Zag Zen present in-depth phenomenological interviews. Nichols and Chemel connect chemistry and religious cognition in their 2006 article "The Neuropharmacology of Religious Experience." An Internet search of clinical trials using hallucinogens (excluding cannabis and related compounds) locates more than a dozen current trials, while the MAPS website keeps readers up-to-date on completed, current, and planned research.
Perhaps the most curious and exciting prospect psychedelics offer is their impact on humanistic and religious concepts such as meaningfulness, significance, portentousness, values, transcendence, self-concept, aesthetic perception, identity, beliefs, and sacredness. These abstractions form the vitals of humanistic studies, but until psychedelics, they have been hard to study in experiments. The provocative psychedelic studies throughout this book indicate that these abstractions may become dependent variables when mindbody states are the independent variables.
In 1998 biologist Edward O. Wilson, author of two Pulitzer Prize-winning books and recipient of other honors and awards, challenged the scientific community to build a multidisciplinary cognitive structure that integrates all branches of knowledge. He called his book and the project Consilience. Psychedelics are a natural for this major league intellectual project. They are naturally interdisciplinary. They link topics from the neurochemistry of our brains to Greek mythology and film criticism. As the Griffiths et al. studies of the effects of psilocybin on personal meaningfulness and sacredness exemplify, psychedelics provide one way to overcome the problem of integrating different lines of inquiry into a multilayered scaffolding of empirical evidence.
How do the chemical, biological, psychological, cognitive, and social levels influence each other? With psychedelics questions such as "How do biochemical changes affect beliefs?" are open to experimentation. Conversely, researchers can experimentally examine the question "How do someone's beliefs and cognitive expectations influence the outcomes of biochemical experimental treatments?" By providing models for independent variables on one level and dependent variables on others, meditation, psychedelics, and other mindbody psychotechnologies provide ready-made roads to advance the consilience project. Wilson recognized this. "Shamans preside over the taking of hallucinogenic drugs and interpret the meaning of the serpents and other apparitions that subsequently emerge" (1998, 72). He reports, "[The shaman's] drug of choice, widely used in the communities of the Rio Ucayali region, is ayahuasca [pronounced eye-uh-WAHS-ska], extracted from the jungle vine Banisteriopsis." Illustrating consilience, he follows this with, "The sacred plants, which have been analyzed by chemists, are no longer mysterious. Their juices are laced with neuromodulators that in large oral doses produce a state of excitation, delirium, and vision" (73). Wilson recognized that chemical input yields cognitive output, yet another instance of the chemical-cognitive relationship that most of the researchers mentioned above have implicitly noted.
Discovering Hidden Parameters of the Mind
Although the mentioned studies have implications for the cognitive sciences, they were not expressly designed to do so. Shanon's The Antipodes of the Mind: Charting the Phenomenology of the Ayahuasca Experience intentionally hybridizes cognitive psychology and psychedelics.
"Not only can a cognitive-psychological analysis make a crucial contribution to the study of Ayahuasca, the converse is also the case — the study of Ayahuasca may have implications of import to our general understanding of the working of the human mind. Ayahuasca (along with other mind-altering substances) expands the horizons of psychology and reveals new, hitherto unknown territories of the mind. Thus the study of Ayahuasca presents new data pertaining to human consciousness, and thus new issues for investigation, new ways to look at things, new questions, and perhaps even new answers." (2002, 37)
Shanon claims that one contribution of studying nonordinary mindbody states is "rendering the parameters of the cognitive system apparent and revealing the various possible values these parameters may take" (196). Will additional explorations into other mindbody states using other psychotechnologies discover still more of cognition's hidden parameters? Many assumptions that singlestate cognitive psychologists make about "givens" are based on data only or predominantly from our usual awake state. Some of their supposedly stable assumptions are really unrecognized variables, taking on other values in other states. By illustrating how the cognitive sciences and psychedelics can inform each other, his work models an enhanced multistate cognitive science.
The Omitted Evidence
Current professional discussions of cognitive enhancement (e.g., the 2008 Committee on Military and Intelligence Methodology for Emergent Neurophysiological and Cognitive/Neural Science Research in the Next Two Decades) and articles in consumer periodicals (Greely et al. 2008; Greely 2009; Talbot 2009) omit the strongest evidence. These omissions all have to do with psychedelics. While the contribution of psychedelics to music (Bromell 2000), art (Masters and Huston 1968; Johnson 2011), religion (Smith 2000; Roberts 2012), medicine (Winkelman and Roberts 2007), and psychotherapy are becoming recognized, recognition of their contributions to cognitive enhancement lags. Whether this omission is due to a simple lack of information or scientists' and scholars' fear for their careers by touching a taboo topic is hard to say; it is probably some of both.
Whatever the reason, the scientific climate is changing, as the title of Morris's 2008 editorial in The Lancet put it, "Research on Psychedelics Moves into the Mainstream" (1,491). It is time for the cognitive studies to wake up. Dormant leads from the 1950s to the 1970s are being picked up now, and four decades of updated research methods in the neurosciences are moving this frontier forward again. Society benefits from intellectual work. If chemicals make that work more efficient, insightful, and creative, isn't it a professional duty for intellectuals to work as well as they can by using chemical cognitive enhancers?
As important as psychedelics are for enhancing cognition, strengthening intelligence, and fulfilling cognitive studies, the psychedelic group is only one group of mindbody techniques among others. Meditation, biofeedback and neurofeedback, the martial arts, yoga, breathing techniques, contemplative prayer, and selected exercise routines, rites of passage, and vision quests are other ways of producing a fuller range of mindbody states. They deserve careful attention too. Chapters parallel to this one could be written for each of them and for others.
Thomas B. Roberts is author of The Psychedelic Future of the Mind and Psychedelic Horizons, and editor of Spiritual Growth with Entheogens.
Microsoft to pay less than market rate at new digs inside 11 Times Square
BRYAN BOULETTESTAFF REPORTER
Bandai has shipped its PlayStation Portable RPG, The Legend of Heroes: A Tear of Vermillion, to North American retailers. The game is rated "T" for Teen and has a manufacturer's suggested retail price of $39.99.
A Tear of Vermillion is the second title in what is known as the Gagharv Trilogy in Japan, following The White Witch and preceding Cagesong of the Ocean. Bandai's latest, however, is the first game of the trilogy to see release outside of Japan, in addition to being the first traditional roleplaying game for the PSP.
RPGamer recently sat down with Legend of Heroes producer Norihiko Ushimura, and you can read that interview here. Those interested in purchasing the game may find RPGamer's complete review here.
The Legend of Heroes: A Tear of Vermillion | 科技 |
2016-40/4016/en_head.json.gz/4274 | Microsoft to pay less than market rate at new digs inside 11 Times Square
Next » From left: Steve Pozycki and 11 Times Square
Microsoft will pay rent in the low $60s per square foot to lease its 200,000-square-foot office space at 11 Times Square—a good deal compared to the roughly $75-per-square-foot that comparable properties are commanding, the Wall Street Journal reported. The deal is the result of the building’s developer — a joint venture between Steven Pozycki, the CEO of SJP Properties, and Prudential Financial — wanting to secure a major tenant inside the 1.1 million-square-foot building.
As previously reported, Microsoft had also looked at renewing its lease at 1290 Sixth Avenue or setting up shop at 641 Sixth Avenue before inking the 11 Times Square deal in late December.
The building is located across from the Port Authority bus terminal. Additional terms of the lease were not disclosed.
“Microsoft’s decision to lease space at 11 Times Square is another significant sign that this building is being targeted by top-tier global firms looking to maintain their competitive edge by locating within transit hubs that provide unparalleled access and convenience to their employees, clients and customers,” Pozycki said in a separate statement. [WSJ] —Zachary Kussin
Tags: 11 Times Square, microsoft
Government of Canada Invests in Major Science Facilities
OTTAWA, ONTARIO -- (Marketwire) -- 01/22/13 -- The Government of Canada, through the Canada Foundation for Innovation (CFI), is investing in maintenance and operating support for Canada's high-performing, internationally renowned research facilities. Canada's synchrotron research installation, a national high-performance computing platform, and a world-class underground neutrino and dark matter physics laboratory are all receiving funding from CFI's Major Science Initiatives fund. The investment will enable Canada's best and brightest researchers to carry out internationally competitive research, which will benefit Canadian families and businesses.
"Canada is a world leader in innovation," said the Honourable Gary Goodyear, Minister of State (Science and Technology). "By investing in major research facilities, such as these, our Government is helping Canada's research community reach new heights, address national priorities and meet global challenges."
The $145 million in funding announced today will help sustain scientific excellence at three major facilities:
-- The Canadian Light Source at the University of Saskatchewan, where researchers are working with the scientific community to promote the use of synchrotron light, creating industrial partnerships and innovation, and engaging in scientific and educational outreach in sectors ranging from mining to healthcare;
-- Compute Canada Calcul Canada, a national network of computing resources designed to keep Canada competitive in digital research and analysis, working to ensure that Canadian researchers have the computational facilities and expert services necessary to advance scientific knowledge and innovation;
-- SNOLAB, a world-class neutrino and dark matter physics laboratory located two kilometres below the Earth's surface in Sudbury, Ont., which is expected to generate $93 million in economic activity for the Ontario economy over the next five years.
The funding announcement is in addition to the $32 million announced for Ocean Networks Canada in October 2012. "These facilities are all major drivers of economic and scientific productivity in Canada," said Gilles G. Patry, President and CEO of the CFI. "We are pleased to be playing a role in their continued success."

About Major Science Initiatives (MSI)

A major science initiative addresses a set of significant leading-edge scientific problems or questions. The scope of these areas of research — ocean and earth sciences, for example — is so significant and complex that it requires unusually large-scale facilities and equipment, substantial human resources, and complex operating and maintenance activities. These projects have a lifecycle extending many years. The funding for MSIs announced today is part of Budget 2010.
About the Canada Foundation for Innovation
The Canada Foundation for Innovation gives researchers the tools they need to think big and innovate. By investing in state-of-the-art facilities and equipment in Canada's universities, colleges, research hospitals and non-profit research institutions, the CFI is helping to attract and retain the world's top talent, to train the next generation of researchers, to support private-sector innovation and to create high-quality jobs that strengthen the economy and improve the quality of life for all Canadians. For more information, visit www.innovation.ca.
Ryan Saxby Hill
Canada Foundation for Innovation
613-294-6247 (mobile)[email protected]
Michele-Jamali Paquette
Office of the Honourable Gary Goodyear
Minister of State (Science and Technology)
May 19, 2014, 4:33pm EDT | Updated: May 19, 2014, 4:51pm EDT
How Mattel's 'Project Platypus' could open Google Glass to women
Google Glass is largely thought of as a technology used mostly by men, something that the new person in charge may be perfectly qualified to change. (Photo: Flickr / Creative Commons / Jessica Mullen)
Michael del Castillo
Upstart Business Journal Technology & Innovation Editor
The UpTake: Google Glass is largely thought of as a technology used mostly by men, something that the new person in charge, Ivy Ross, may be perfectly qualified to change.

Last week Google announced a surprising new head of Google Glass: Ivy Ross, a marketing executive who most recently worked as Art.com’s chief marketing officer. The move is perplexing in that she has almost no direct technology background, yet she will lead the development of such an iconic, potentially revolutionary piece of technology.
That is, until you take a look at Project Platypus, a little-known research team she gathered way back in 2001, when she was Mattel’s vice-president of worldwide girls design. Her experience then could help her—and Google—change the perception that Glass is widely being used just by men.
"The rebel in me got this going," she told Fast Company in a 2002 report. "I wanted to prove that people don't have to be put in narrow boxes. Designers aren't the only people who can create toys. If you put a bunch of creative thinkers in the right environment and drop the job titles, you'll discover amazing creativity."
Sound familiar, Google X?
Previously, Ross led a cross-disciplinary team in the development of Ello, a Lego-like “creation system” built on the principle that girls like to build things as much as boys, “Girls simply build differently from boys,” according to the report. Platypus was intended to take that idea, and with a team of 12 people “playing” in shifts for three-month blocks, set out to re-imagine the way children play with toys. After just five weeks her team had created 33 toy ideas, according to the report.
TechCrunch speculates that Google may have hired Ross because of her time as the VP of design and development for Outlook Eyewear. But with so many more qualified individuals already employed at Google, Ross’s biggest asset is almost certainly her ability to see technology as something that transcends gender. We’ve reached out to Google for a breakdown of Glass’s user demographics, and in the meantime, there’s plenty to learn from some unofficial studies.
Last year, following the creation of an entire Tumblr account dedicated to the disproportionate amount of white men wearing Google Glass compared to women and people of other skin colors, PC Magazine set out to photograph every person they saw wearing the technology, and came out with this painfully skewed, even if unofficial, conclusion: “Google Glass, it seems, doesn't have a "white male problem" so much as it just has a "male problem," according to the report.
TechCrunch did its own study and found that in March of last year 80 percent of Google Glass posts were from men, and 13 percent from women, with the remainder comprised of “uncertain cases.” On Twitter, for authors whose first names were easily assigned a gender, 80 percent were male, and 20 percent female.
Now that Glass is finally available for $1,500 to the general public in the United States, such a gender gap translates into lost cold, hard cash. With Ross’s proven ability to create products that transcend traditional gender lines, and marketing experience to boot, she could be just the woman to help bring Glass to the masses of both genders.
Upstart Business Journal Technology & Innovation Editor
Michael del Castillo is the technology and innovation reporter at Upstart Business Journal, a member of American City Business Journals. A graduate of Columbia University, his work has appeared in the New Yorker. He is also the cofounder of Literary Manhattan, a nonprofit dedicated to promoting Manhattan’s literary community and creating new ways to appreciate literature. | 科技 |
Open-cast mines around 3, september 2010
Dmitri Melinchuk
Copyright: Dmitri Melinchuk
Tags: russia; prokopievsk; open-cast mine; autumn; 2010; bicycle
More About Russia
Just in case you mistakenly heard that it was all ice and snow in Russia, take a peek at the Big Bikini Exposition. This is right on the river Moskva in Moscow!

Moscow has been the capital of Russia for almost its entire history. The exception is during the period of the Russian Empire, which lasted from 1721 until the Russian Revolution in 1917. For these two centuries the capital was St. Petersburg. The Russian Empire was the second largest contiguous Empire in world memory; only the Mongol Empire had been greater.

Check out what's happening north of Mongolia these days, in Chita. Although you may not have heard of Sochi, on the Black Sea, they're building up quickly and hope to host the 2014 Olympics.

Other periods of Russian history include the Tsardom of Russia, from Ivan IV to Peter the Great, and the Grand Duchy (14th-16th centuries). The earliest period of Russian history was ruled by the Novgorod Republic and Kievan Rus, which was the first Russian state dating back to 800AD in Kiev.

Modern Russia remains one of the world's superpowers. They launched the earth's second satellite, called Sputnik 1, and were the first country to put a human being into orbit around earth. (The first one is called the Moon.) After the breakup of the Soviet Union in 1991, Russia became a federal republic of 83 states.

Text by Steve Smith.
Space | Air & Space Magazine
Voices from the Moon
What it was like, in the astronauts’ own words.
Andrew Chaikin and Victoria Kohl
Neil Armstrong, Apollo 11 Commander
(NASA)
In my view, the emotional moment was the landing. That was human contact with the moon, the landing. The fact that we were eight feet...or ten feet separated from the surface of the moon rather than two inches at the time I was [standing on it]...didn't seem to me like a significant difference. It was at the time when we landed that we were there, we were in the lunar environment, the lunar gravity. That, in my view was the—that was the emotional high. And the business of getting down the ladder to me was much less significant. You know, I wouldn't have focused on that at all except that the press and everyone was making so much of a big thing about the exit from the vehicle and step on the surface with the boot.
(Photo: Armstrong (right) and Buzz Aldrin walk back to their lunar module after raising the American flag on the Sea of Tranquillity.)
About Andrew Chaikin
Andrew Chaikin is the author of A Man on the Moon, A Passion for Mars, and other books on space exploration. He has been an adviser to NASA on space policy.
Download iTunes
Thank you for downloading iTunes.
You’re all set to discover and enjoy music, movies, TV shows, and more on your Mac, PC, iPhone, iPad, and iPod. You can also join Apple Music, where you’ll find just about every song ever recorded. And you can tune in to our free flagship radio station, Beats 1, streaming 24/7 and bringing you the latest music, interviews, and culture.
All the music you love. Free for three months.1
Join Apple Music now and get unlimited access to millions of songs, the best music and culture on Beats 1 radio, and expertly curated radio stations. All of it available anytime you want. Learn more
Try it free for three months
The movie and TV collection you always wished for.
With over 85,000 movies and 300,000 TV shows to choose from, there’s always something great to watch on iTunes.2 Catch up on your favorite shows or hit movies — anytime, anywhere.
1. New members only. Sign-up required. Membership automatically renews monthly after trial.
2. Refers to the total number worldwide. Not all content is available in all countries.
To learn how Apple safeguards your personal information, please review the Apple Customer Privacy policy.
iTunes is licensed for reproduction of noncopyrighted materials or materials the user is legally permitted to reproduce. Purchases from the iTunes Store, some features, products, and content types are not available in all countries. See Terms of Sale. | 科技 |
Edison. Ford. Disney. Jobs.
An era has ended, and we now sit to reflect on our good fortune for having lived in a time when a true giant walked the Earth. I had certainly contemplated his passing many times, but now that it has happened, I am struggling to grasp the concept that Steven Paul Jobs is gone and not coming back.
You can love or hate the man, his company, and his products. You can simply not care much either way. But there is no disputing that everything we know and think about technology today has been dramatically influenced in one way or another by Steve Jobs. His vision and leadership have repeatedly changed millions of lives for the better. He is one of the most significant individuals of our generation, of the last century, and when all is said and done, probably in the history of this world.
It’s not every day that the President of the United States writes a two-hundred word eulogy. To really see Steve’s impact on the world, though, you need to turn to his adversaries. Google’s Vic Gundotra, who famously skewered the man and his vision just last year, went out of his way to tell a long story of admiration and respect on Google+ when Jobs resigned as Apple’s CEO in August. Sergey Brin, Larry Page, and yes, Bill Gates have all chimed in following Wednesday’s news. Steve’s passing is a loss for the world. We are now left to think about what could have been.
It was only the day before that we watched an Apple event launching new hardware and software products. Steve was not in the building, but he sure as hell was watching on Tuesday. And I’m sure he was reviewing slides and demos to the very end. He held on for one last launch, the first out from under his tenure, to see for himself that his legacy was intact.
And so more than ever, I find myself inspired. Steve’s untimely death reminds us we can never give up. He could have given up at any point in the seven years since his first cancer diagnosis, but he did not. The vast majority of Apple’s unprecedented resurgence took place while Steve Jobs stared death in the face. How many of us could have lasted this long at all, let alone accomplish all that he did along the way?
Ten years ago today, we still had not yet met the iPod. The last of Steve’s five decades on this Earth ended up being his most accomplished by far. Remember that whenever you think your best days are behind you. We can’t control when our lives begin, and we can’t really control when they end. All we have is what’s in between. Make it count.
Steve did.
© 2010-2016 Matt Drance.
Securing biomass's position and meeting RPS goals
By Lisa Gibson
Biomass is the No. 1 renewable energy resource for the Sacramento Municipal Utility District, holding a steady 41 percent of energy from renewables, according to Michael DeAngelis, manager of SMUD's Advanced, Renewable & Distributed Generation Technologies Program.
DeAngelis was the first speaker in the panel titled Securing the Position of Biomass in California's Energy Portfolio at the Pacific West Biomass Conference & Expo in Sacramento. "It's clear that renewable energy and biomass are here to stay," DeAngelis told attendees. "Biomass has become a very significant portion of our energy supply in the Sacramento region. We expect it to remain a major portion of our supply." That biomass energy supply consists primarily of wood waste cogeneration, with five landfill and wastewater gas projects, along with the use of biomethane in the pipeline, which he said is the fastest-growing of all SMUD's renewable energy options.
The publicly owned utility has the most aggressive energy efficiency goals — 15 percent in 10 years — of any large utility in the state of California, DeAngelis said. In addition, the company expects to be the only large utility to meet California's renewable portfolio standard (RPS) of 20 percent by 2010. Overall, the state is not on track to achieve that goal, according to Michael Leaon, supervisor of the Integrated Energy & Climate Change Unit of the California Energy Commission. Additionally, the Governor's Executive Order establishes a 33 percent RPS by 2020, with 20 percent from biopower, Leaon said. Statewide, 20 percent of renewable energy in 2008 came from biomass, 70 percent of that from plants that came on line by 2000, he added.
Financial support has the potential to spur biomass development and help reach RPS goals, Leaon told attendees. The state offers the Existing Renewable Facilities Program, which provides incentive payments for energy generated from solid biomass. "This has been an important program for supporting solid biomass facilities," he said. Gregg Morris, director of the Green Power Institute, agreed that California's RPS is too lofty a goal and current progress has not kept up. "There's not a chance that we will achieve that 20 percent target by 2010," he said. "We want our renewable production overall in California to approximately triple compared with where it was in 2002." The RPS requires utilities to increase renewable energy sources by 1 percent annually, but they have instead declined since the RPS was established in 2002, he said. "They've actually fallen behind every single year since it went into effect." Biomass made up 22 percent of total renewable energy in California in 2003, decreasing to 20 percent in 2008. In that time, the mandate required 43 percent renewable energy generation, but only 3.4 percent was achieved. The reason: Biomass is expensive, Morris said. "But the fact is, it's also expensive not to do it," he said, adding that landfill burial and burning of waste are more costly alternatives. Since 1980, 60 biomass plants have been built in California, only half of which are still operational. The next steps to increasing biomass use are continued support for existing facilities, bioenergy banding within the RPS, targeted credits for specific biomass facilities and generating credits for bioenergy, Morris said. David Bischel, president and CEO of the California Forestry Association talked about the untapped potential for woody biomass in California's forestlands. One-third of California's 100 million acres is forestland. The state has more than twice as much wood standing dead in forests as there is wood being harvested, he said. More than 10 million acres are at high or very high risk of catastrophic fire, driven by accumulated fuel loads. Climate change could make that fire threat worse, Bischel said. "The opportunities for sustainable biomass utilization are immense in dealing with this issue," he said. Reducing that fuel load also protects water quality and lowers the cost of cleaning it for consumption.
There are 14 million bone-dry tons of potential woody biomass that needs to be removed from forests, Bischel said, with the potential to produce 1,750 megawatts of electricity and 17,000 new jobs. Challenges include the cost, definition of biomass and regulations. "I couldn't agree more that California is really in a state of regulatory gridlock," he said. But the will, motivation and resources on the part of landowners are what will determine how far woody biomass can go, he added.
-Lisa Gibson
10/14/2012 08:07:02 PM MDT

ROSWELL, N.M.—In a giant leap from more than 24 miles up, a daredevil skydiver shattered the sound barrier Sunday while making the highest jump ever—a tumbling, death-defying plunge from a balloon to a safe landing in the New Mexico desert.

Felix Baumgartner hit Mach 1.24, or 833.9 mph, according to preliminary data, and became the first person to reach supersonic speed without traveling in a jet or a spacecraft after hopping out of a capsule that had reached an altitude of 128,100 feet above the Earth.

Landing on his feet in the desert, the man known as "Fearless Felix" lifted his arms in victory to the cheers of jubilant friends and spectators who closely followed his descent in a live television feed at the command center.

"When I was standing there on top of the world, you become so humble, you do not think about breaking records anymore, you do not think about gaining scientific data," he said after the jump. "The only thing you want is to come back alive."

A worldwide audience watched live on the Internet via cameras mounted on his capsule as Baumgartner, wearing a pressurized suit, stood in the doorway of his pod, gave a thumbs-up and leapt into the stratosphere.

"Sometimes we have to get really high to see how small we are," an exuberant Baumgartner told reporters outside mission control after the jump.

Baumgartner's descent lasted just over nine minutes, about half of it in a free fall of 119,846 feet, according to Brian Utley, a jump observer from the FAI, an international group that works to determine and maintain the integrity of aviation records. He said the speed calculations were preliminary figures.
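The two headline numbers are consistent with each other once you account for how cold the air is at those altitudes. As a rough sanity check (my own calculation, not part of the AP report, and using textbook dry-air constants rather than anything measured during the jump), dividing the reported speed by the reported Mach number gives the implied local speed of sound, and the ideal-gas relation then gives the implied air temperature. It comes out near minus 48 degrees Celsius, which is why 833.9 mph qualifies as supersonic even though Mach 1 at sea level is roughly 761 mph.

```python
# Back-of-the-envelope check (not from the AP report): how the reported
# 833.9 mph and Mach 1.24 relate to the local speed of sound and air
# temperature. Uses standard dry-air constants, not jump measurements.

GAMMA = 1.4           # ratio of specific heats for dry air (standard value)
R_AIR = 287.0         # specific gas constant for dry air, J/(kg*K)
MPH_PER_MS = 2.23694  # 1 m/s expressed in mph

reported_speed_mph = 833.9   # figure quoted in the article
reported_mach = 1.24         # figure quoted in the article

# Mach number = speed / local speed of sound, so invert it:
speed_of_sound_mph = reported_speed_mph / reported_mach    # ~672 mph
speed_of_sound_ms = speed_of_sound_mph / MPH_PER_MS        # ~301 m/s

# Speed of sound in an ideal gas: a = sqrt(gamma * R * T); solve for T.
implied_temp_k = speed_of_sound_ms ** 2 / (GAMMA * R_AIR)  # ~225 K

print(f"Implied local speed of sound: {speed_of_sound_mph:.0f} mph")
print(f"Implied air temperature: {implied_temp_k:.0f} K "
      f"({implied_temp_k - 273.15:.0f} degrees C)")
```

Treat the temperature figure as an estimate under those stated assumptions; the report itself only gives the speed and the Mach number.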
During the first part of Baumgartner's free fall, anxious onlookers at the command center held their breath as he appeared to spin uncontrollably.

"When I was spinning first 10, 20 seconds, I never thought I was going to lose my life but I was disappointed because I'm going to lose my record. I put seven years of my life into this," he said. He added: "In that situation, when you spin around, it's like hell and you don't know if you can get out of that spin or not. Of course it was terrifying. I was fighting all the way down because I knew that there must be a moment where I can handle it."

Baumgartner said traveling faster than sound is "hard to describe because you don't feel it." The pressurized suit prevented him from feeling the rushing air or even the loud noise he made when breaking the sound barrier. With no reference points, "you don't know how fast you travel," he said.

The 43-year-old former Austrian paratrooper with more than 2,500 jumps behind him had taken off early Sunday in a capsule carried by a 55-story ultra-thin helium balloon. His ascent was tense at times and included concerns about how well his facial shield was working.

Any contact with the capsule on his exit could have torn his suit, a rip that could expose him to a lack of oxygen and temperatures as low as minus-70 degrees. That could have caused lethal bubbles to form in his bodily fluids. But none of that happened.

He activated his parachute as he neared Earth, gently gliding into the desert about 40 miles east of Roswell and landing smoothly. The images triggered another loud cheer from onlookers at mission control, among them his mother, Eva Baumgartner, who was overcome with emotion, crying. He then was taken by helicopter to meet fellow members of his team, whom he hugged in celebration.

Coincidentally, Baumgartner's accomplishment came on the 65th anniversary of the day that U.S. test pilot Chuck Yeager became the first person to officially break the sound barrier in a jet. Yeager, in fact, commemorated that feat on Sunday, flying in the back seat of an F-15 Eagle as it broke the sound barrier at more than 30,000 feet above California's Mojave Desert.

At Baumgartner's insistence, some 30 cameras recorded his stunt. Shortly after launch, screens at mission control showed the capsule, dangling from the massive balloon, as it rose gracefully above the New Mexico desert, with cheers erupting from organizers. Baumgartner could be seen on video, calmly checking instruments inside.

The dive was, in fact, more than just a stunt. NASA is eager to improve its blueprints for future spacesuits. Baumgartner's team included Joe Kittinger, who first tried to break the sound barrier from 19.5 miles up in 1960, reaching speeds of 614 mph. With Kittinger inside mission control, the two men could be heard going over technical details during the ascension.

"Our guardian angel will take care of you," Kittinger radioed to Baumgartner around the 100,000-foot mark.

An hour into the flight, Baumgartner had ascended more than 63,000 feet and had gone through a trial run of the jump sequence. Ballast was dropped to speed up the ascent. Kittinger told him, "Everything is in the green. Doing great."

As Baumgartner ascended, so did the number of viewers watching on YouTube; company officials said the event broke a site record with more than 8 million simultaneous live streams at its peak.
After Baumgartner landed, his sponsor, Red Bull, posted a picture of him on his knees on the ground to Facebook, generating nearly 216,000 likes, 10,000 comments and more than 29,000 shares in less than 40 minutes. On Twitter, half the worldwide trending topics had something to do with the jump, pushing past seven NFL football games. Among them was this tweet from NASA: "Congratulations to Felix Baumgartner and RedBull Stratos on record-breaking leap from the edge of space!"

This attempt marked the end of a long road for Baumgartner, a record-setting high-altitude jumper. He already made two preparation jumps in the area, one from 15 miles high and another from 18 miles high. He has said that this was his final jump. Red Bull has never said how much the long-running, complex project cost.

Although he broke the sound barrier, the highest manned-balloon flight record and became the man to jump from the highest altitude, he failed to break Kittinger's 5 minute and 35 second longest free fall record. Baumgartner's was timed at 4 minutes and 20 seconds in free fall. He said he opened his parachute at 5,000 feet because that was the plan. "I was putting everything out there, and hope for the best and if we left one record for Joe—hey, it's fine," he said when asked if he intentionally left the record for Kittinger to hold. "We needed Joe Kittinger to help us break his own record and that tells the story of how difficult it was and how smart they were in the 60's. He is 84 years old, and he is still so bright and intelligent and enthusiastic."

Baumgartner has said he plans to settle down with his girlfriend and fly helicopters on mountain rescue and firefighting missions in the U.S. and Austria. Before that, though, he said, "I'll go back to LA to chill out for a few days ... will take it easy as hell, trust me."

———

AP Science Writer Alicia Chang and Associated Press writer Christopher Weber in Los Angeles contributed to this report.
FTC Opens Formal Probe Of Intel
Shown is the Intel logo outside their Robert N. Noyce building in Santa Clara, Calif., Monday, July 16, 2007. Intel Corp. reports second-quarter earnings on Tuesday, July 17, 2007. (AP Photo/Eric Risberg)
Escalating the world's largest computer chip maker's legal woes, the Federal Trade Commission has opened a formal probe into Intel's sales tactics, a victory for its much smaller rival, Advanced Micro Devices Inc.

Intel disclosed Friday that it has received a subpoena from the FTC for records about Intel's microprocessor sales, which dominate the world market with a roughly 80 percent share.

The FTC's two-year investigation had been considered "informal" until that point, and Intel, which is already fighting antitrust charges in the European Union and was fined this week by antitrust regulators in South Korea, said it had been cooperating.

By opening a formal investigation, Intel said, the FTC will be able to get access to documents revealing Intel's communications with certain customers - documents Intel couldn't voluntarily provide because of a protective order that is part of a sweeping antitrust lawsuit AMD filed in 2005 that isn't expected to go to trial until 2010.

"From our perspective, it's not a surprising event nor is there any really substantive change in the relationship we've had with the FTC," Bruce Sewell, Intel's general counsel, said in an interview.

The FTC's intensifying look at Intel's business practices is a result of AMD's long-running campaign to convince antitrust regulators around the world that its business has been hurt by Intel's aggressive tactics. AMD also said Friday that it received a subpoena this week from the FTC - though the company said it is not a target of the investigation.

The two companies have been fighting for years over what AMD claims is Intel's intimidation of computer makers into striking exclusive deals for the chips they use in their new machines.

AMD claims the rebates and financial incentives Intel offers to those companies for buying more Intel chips are designed to prevent AMD from gaining market share - and that Intel threatens those manufacturers that it will retaliate if they introduce models based on AMD's chips.

AMD argues that Intel's volume discounts are sometimes so steep that AMD can't cut its own prices enough to compete without losing money on the sales.

Intel has repeatedly denied breaking any laws. It said Friday that the sharp drop in microprocessor prices over the past seven years shows that the "evidence that this industry is fiercely competitive and working is compelling."

In an interview last week with The Associated Press, before the company received the subpoena, Intel Chief Executive Paul Otellini noted that Intel has been investigated by the FTC and the Department of Justice before, and he said he stands by the company's actions.

"I think there's nothing we've done that warrants further investigation by the U.S. government," Otellini said.

Should the FTC find Intel violated federal law, Intel could face severe fines, and the way the world's computer chips are bought and sold could change.

AMD said the probe helps it hold Intel accountable for sales strategies that it argues have hurt AMD's business and technology consumers.

"Intel must now answer to the Federal Trade Commission, which is the appropriate way to determine the impact of Intel practices on U.S. consumers and technology businesses," Tom McCoy, AMD's executive vice president and chief administrative officer, said in a statement.
"In every country around the world where Intel's business practices have been investigated, including the decision by South Korea this week, antitrust regulators have taken action."Another major legal headache for Intel is the lawsuit AMD filed against it in U.S. District Court in Delaware in 2005 - a case that could mean billions of dollars in damages if AMD wins. The parties are now exchanging documents in the "discovery" phase of that case.AMD's complaints have also triggered antitrust investigations in several countries outside their home U.S. market as well.The European Union has accused Intel of paying manufacturers to delay or cancel product lines using AMD chips and selling the manufacturers its own chips below the average cost of producing them.And on Thursday, Intel was slapped with a $25.4 million fine by the Korea Fair Trade Commission, which accused the semiconductor giant of using hefty rebates to convince Samsung Electronics Co. and other South Korean computer makers to not use central processing units, or CPUs, manufactured by AMD.Intel shares fell 78 cents, or 3.3 percent, to $23.09 in midday trading. AMD shares fell 26 cents, or 3.3 percent, to $7.52. | 科技 |
Great Dismal Swamp fire could help with tree restoration
There hasn’t been much good to say about the Great Dismal Swamp wildfire.

In less than three weeks, it has torched 6,000 acres and the resulting smoke continues to pose health risks from Suffolk to Gloucester County and beyond.
There is, however, a potential positive effect.

Believed to have been started by lightning on Aug. 4, the fire is fed by peat — a carbon-rich blanket of vegetation that covers the swamp floor. It can be as deep as 30 feet.
The accumulation of peat can be problematic, according to Christopher Newport University professor Robert Atkinson, because in many places the water level sits below it. Consequently, the swamp becomes extremely dry and susceptible to wildfires.

An optimistic view is that the peat will burn down to the current water level, making the parched swamp soggy. If so, conditions could be right for the resurgence of the Atlantic white cedar and other vegetation, Atkinson said in an email.

The Dismal Swamp was once home to the largest stand of white cedars, which are found from Maine to Mississippi. Hundreds of acres of cedars, which collect and store more carbon than any other ecosystem in North America, were burned in a 2008 wildfire.

Atkinson, his students, the federal government and other groups replanted the trees only to see the fire burn them earlier this month.

Whenever the fire stops — remnants of Hurricane Irene, anyone? — Atkinson said he plans to examine its effect on the swamp. From there, he hopes to restart the cedar restoration project.
CERN Turns on the LHC
CERN's massive Large Hadron Collider went online today, performing even better than expected. It's now the world's largest particle accelerator and it's scheduled to start probing the universe's most puzzling questions in just a few short months. (Source: CERN)
The launch of the world's largest particle accelerator is going almost seamlessly thus far
CERN's Large Hadron Collider (LHC) has gone online, becoming the world's largest operational particle collider. The LHC is the result of roughly $9 billion and years of collaboration from researchers worldwide. It promises to help unlock great mysteries such as the Higgs boson and to provide deeper insight into how antimatter behaves.
The launch did have its share of hassles. First, researchers were alarmed by death threats from fearful observers who worried the device would generate huge black holes, despite reassurance from the world's top scientists that any tiny black holes that did arise would quickly evaporate. Second, according to CERN officials, late last night the LHC was experiencing some "small electrical problems".
None of these issues could put a damper on the launch, though, and it continued on schedule. It turned on at 9:30 AM CEST, and at 9:49 AM the first beam of protons was fired through the first 3 km of the 27-km ring. It took 48 seconds to generate the pulse. Firing ramped up, and by 10:25 AM the proton beam was travelling the entire track. The tests went more quickly and had fewer issues than expected. Counterclockwise beams are currently being tested.
CERN expects the LHC to be fully operational and unlocking the mysteries of the universe within a few months, based on the strong initial testing. After the counterclockwise tests, the next step will be to perform the first atom smashing later this month, colliding two proton beams. Expect to hear much more news about the world's largest particle accelerator in the near future.
HP buys beleaguered Palm
Wed, Apr 28, 2010 – Well, this is certainly a tech bombshell. It's no surprise that Palm was looking for a buyer after its efforts to become a player in the smartphone market didn't go according to plan. But what is surprising is who stepped in to purchase it – Hewlett-Packard.
In a deal worth $1.2 billion, HP swooped in today and rescued Palm from the brink of obscurity. Palm CEO and former Apple executive Jon Rubinstein is expected to remain with the company, though in what capacity remains unclear. You can check out the press release in its entirety after the break.
PALO ALTO, Calif. & SUNNYVALE, Calif.–(BUSINESS WIRE)–HP (NYSE: HPQ) and Palm, Inc. (NASDAQ: PALM) today announced that they have entered into a definitive agreement under which HP will purchase Palm, a provider of smartphones powered by the Palm webOS mobile operating system, at a price of $5.70 per share of Palm common stock in cash or an enterprise value of approximately $1.2 billion. The transaction has been approved by the HP and Palm boards of directors.
The combination of HP’s global scale and financial strength with Palm’s unparalleled webOS platform will enhance HP’s ability to participate more aggressively in the fast-growing, highly profitable smartphone and connected mobile device markets. Palm’s unique webOS will allow HP to take advantage of features such as true multitasking and always up-to-date information sharing across applications.
“Palm’s innovative operating system provides an ideal platform to expand HP’s mobility strategy and create a unique HP experience spanning multiple mobile connected devices,” said Todd Bradley, executive vice president, Personal Systems Group, HP. “And, Palm possesses significant IP assets and has a highly skilled team. The smartphone market is large, profitable and rapidly growing, and companies that can provide an integrated device and experience command a higher share. Advances in mobility are offering significant opportunities, and HP intends to be a leader in this market.”
“We’re thrilled by HP’s vote of confidence in Palm’s technological leadership, which delivered Palm webOS and iconic products such as the Palm Pre. HP’s longstanding culture of innovation, scale and global operating resources make it the perfect partner to rapidly accelerate the growth of webOS,” said Jon Rubinstein, chairman and chief executive officer, Palm. “We look forward to working with HP to continue to deliver industry-leading mobile experiences to our customers and business partners.”
Under the terms of the merger agreement, Palm stockholders will receive $5.70 in cash for each share of Palm common stock that they hold at the closing of the merger. The merger consideration takes into account the updated guidance and other financial information being released by Palm this afternoon. The acquisition is subject to customary closing conditions, including the receipt of domestic and foreign regulatory approvals and the approval of Palm’s stockholders. The transaction is expected to close during HP’s third fiscal quarter ending July 31, 2010.
Palm’s current chairman and CEO, Jon Rubinstein, is expected to remain with the company.
Amazon Could Be the Next Brick-and-Mortar Giant
In an ironic twist of fate, the company poised to take over now-bankrupt RadioShack stores might be the one most responsible for putting them out of business in the first place.
Amazon.com, the e-commerce giant synonymous with retail destruction and disruption, is rumored to be in talks with RadioShack about moving into select store locations. Amazon would join Sprint and Brookstone in the pursuit of some of this prized real estate.
RadioShack has over 4,000 locations nationwide, though the number Amazon might acquire has not been revealed. Sprint, meanwhile, has expressed interest in taking over 1,300 to 2,000 locations.
This is not your mama's Amazon
Five to ten years ago, this move would have been unthinkable for the online juggernaut, but Amazon has changed. For years, the company fought tooth and nail against sales taxes and maintained distribution centers in only a few states, in order to avoid collecting them.
However, as both Amazon and the tax protests grew, the company ceded ground -- it now collects sales tax in 23 states and has added dozens of distribution centers across the country in order to improve delivery speeds. Now that it collects sales tax from a majority of the U.S. population, expanding into physical retail locations is not as surprising a step as it once might have been.
Similarly, Amazon now makes a number of tech gadgets and has already experimented with pop-up kiosks, vending machines, and other mall displays in order to promote these products. The pop-up stores have featured Kindle e-readers, Fire tablets, Amazon Fire TV, and Fire Phones, and Amazon has stepped up its innovation with products including the Echo speaker and the Dash scanner.
Considering Apple's wildly successful move to open its own stores and Microsoft's decision to follow suit, it makes sense to see Amazon aiming to show off its own hardware in the real world.
The next era of retail
As Amazon sales have skyrocketed, other retailers have intensified the competition and begun to catch up. A few years ago, Amazon was stealing customers from Best Buy and other retailers due to "showrooming," where shoppers would look at items in brick-and-mortar stores and then buy them at a lower price on Amazon.
Since then, traditional retailers have become more savvy, stepping up their e-commerce platforms and making prices competitive. As a result, many big box businesses have begun taking e-commerce market share from Amazon, and browsing online only to buy in-store is now more common than showrooming, according to some surveys.
Amazon also seems to have realized that, despite the Prime two-day free shipping promise, nothing beats the satisfaction of getting an item immediately, and in-store pickup and ship-from-store capabilities give the brick-and-mortar chains a significant advantage. The company has made same-day delivery a bigger priority than ever. It recently introduced the Prime Now service, currently available only in Manhattan, which offers free same-day delivery to Prime members, along with one-hour delivery for a fee. The service competes with Google Express, which partners with retailers to provide one-hour delivery in select cities.
This is clearly the next front in the retail wars, making the acquisition of select RadioShack stores a logical strategy for Amazon. Not only would it enable the company to display and sell its new gadgets, it would also serve as a pickup point for customers or a way to facilitate delivery.
The demise of RadioShack is separate from its real estate portfolio -- the company has a large number of small-footprint stores in prime locations that should fit Amazon's needs perfectly (33 stores in Manhattan alone). Amazon should only need to pick up a few of these locations to have its desired impact. Rival Apple, similarly, has been very particular with its site selection.
This is a bold move for Amazon, but this company is no stranger to disruption. It is too soon to say what, if anything, will come of these talks, but a move like this could create yet another outlet for future sales growth.
The article Amazon Could Be the Next Brick-and-Mortar Giant originally appeared on Fool.com.
Jeremy Bowman owns shares of Apple. The Motley Fool recommends Amazon.com, Apple, Google (A shares), and Google (C shares). The Motley Fool owns shares of Amazon.com, Apple, Google (A shares), and Google (C shares). Try any of our Foolish newsletter services free for 30 days. We Fools may not all hold the same opinions, but we all believe that considering a diverse range of insights makes us better investors. The Motley Fool has a disclosure policy. | 科技 |
Russian Soyuz Rocket Launches New European Weather Satellite
Metop-B was launched today, 17 September, from Baikonur in Kazakhstan. The Soyuz rocket lifted off at 18:28 CEST. Carrying a suite of sophisticated instruments, Metop-B will ensure the continuity of the weather and atmospheric monitoring services.
(EUMETSAT)
DARMSTADT, Germany — Europe’s second polar-orbiting meteorological satellite, Metop-B, was successfully placed into orbit Sept. 17 by a Soyuz rocket operating from Russia’s Baikonur Cosmodrome in Kazakhstan. Officials at the European Space Agency’s Esoc space operations center here, which has responsibility for Metop-B’s postlaunch operations phase, confirmed that ground stations had received a signal that the satellite was healthy in orbit.

The 4,082-kilogram satellite, carrying 11 observing instruments from Europe, the United States, Canada and France, will operate in an 820-kilometer polar low Earth orbit. After six months of in-orbit tests, it will monitor weather conditions in tandem with the nearly identical Metop-A launched in October 2006. Metop-A continues to operate with all instruments functioning despite having been in service for nearly a year longer than its contractual design life of five years. Metop-C, a third identical satellite, is being placed into storage and is scheduled for launch between late 2016 and late 2018.

Metop-B had been removed from storage and prepared for a June launch that was canceled because of a dispute between Russia and Kazakhstan about compensation for rocket debris falling on Kazakh soil during Soyuz liftoffs into polar orbit. Alain Ratier, director-general of Eumetsat, Europe’s 26-nation meteorological satellite organization, said the four-month delay has cost Eumetsat more than 10 million euros ($12.5 million).

During a press briefing at Eumetsat headquarters here, Ratier said Eumetsat and industry are still negotiating who will pay these costs. In commercial launch contracts of this type, it is customary that the launch-service provider, in this case the French-Russian Starsem joint venture, not be held liable for the costs of delays.

The entire Metop program — the three satellites, their launches and the related ground infrastructure — cost 3.2 billion euros when adjusted for inflation at 2011 economic conditions. Eumetsat paid 75 percent of this, in keeping with the organization’s long-established relationship with the 19-nation European Space Agency (ESA). ESA finances the design and procurement of the Eumetsat satellites. In the case of Metop, ESA is also paying for the development of three of the satellites’ observing instruments. The U.S. National Oceanic and Atmospheric Administration (NOAA), as part of a joint U.S.-European partnership in polar-orbiting meteorological satellites, is furnishing four instruments. Eumetsat is contributing to NOAA’s polar-orbiting satellites in return.

ESA is preparing to ask its member governments in November to approve funding to begin work on the second-generation Metop satellites. ESA Director-General Jean-Jacques Dordain said in a briefing here before the launch that the agency will seek 800 million euros in Metop Second Generation funding in November. The first launch would be in 2020. Dordain said that even in a time of economic hardship for many ESA member governments, there is a consensus on the necessity of maintaining the meteorological satellite program.

Ratier said Eumetsat has received authorization to proceed with early work on the Metop Second Generation system despite the fact that it has not received 100 percent of the needed funding. Several cash-strapped Eumetsat members — Spain, Ireland, Portugal and Greece among them — had been unable to commit to the program until recently.
Ratier said Eumetsat has received 98 percent of what it needs to begin its Metop Second Generation studies following funding commitments from Spain, Ireland and Portugal. ESA is also asking its governments for 1.6 billion euros over four years to fund experimental Earth observation satellites in ESA’s Explorer series. Finding support for this program has proved difficult, and ESA officials are now talking about scaling back the program to secure support for it.

This story was provided by Space News, dedicated to covering all aspects of the space industry.
BREAKING: Scientist claims comet ISON has companions!
Dr. Astro
...Well, the only way to know this for sure will be when it is closer to Mars, to us...etc Maybe from September we'll have a better data to contrast.
Quoting: pstrusi

No, we already know this for sure, the astrophotographer himself confirmed it. It's hot pixels, end of story.
Quoting: Dr. Astro

No, we know of the hot pixels, we can't be 100% sure ISON is alone until it gets closer is what they are saying.
Quoting: --Voltaic--

Oh really? You speak for the other poster? He said the only way to know "this" in direct response to what I was talking about. I was talking about the hot pixels in the image.

And I agree, there could be things hitching a ride behind ISON.
Quoting: Voltaic

Let's assume for a moment ISON is a fairly big comet at about 20 km diameter for its physical nucleus. Its volume would then be about 4,188,790,200,000 m^3. If we assume a density of about 1500 kg/m^3 (and that's extremely generous; realistically it'd be closer to 400 kg/m^3), then the total mass is about 6.2831853 x 10^15 kg. If an object were orbiting a comet with that mass at any kind of distance, it wouldn't remain in orbit of it. If an object tried to orbit it at the distance that the moon is from Earth, for instance, it would take over 2,000 years to complete a single orbit. It wouldn't even still be orbiting it though, it would have been stripped off by the sun by now. The comet is currently about 5 AU from the sun. The difference in the sun's gravitational acceleration across that orbital distance works out to roughly 2.4 x 10^-7 m/s^2. That may not sound like much, but it's nearly five orders of magnitude higher than the acceleration such an object would experience from the comet, which would only be about 2.84 x 10^-12 m/s^2. In other words, even if an object were "trailing" it as close as the moon is from Earth, it would have been peeled off by the sun a while ago. At further orbital distances it would have been ripped away sooner still. In order to stably orbit the comet, an object would have to assume a much lower orbit and be virtually indistinguishable from the comet's nucleus. McCanney has claimed that the comet is being orbited by objects at two lunar distances. That is in direct contradiction to the evidence. The comet would have to be much, much more massive for that to be the case, and that is indeed his claim: he says it's the size of the earth or larger. He's STILL claiming Pete's image of the comet showed "companions," but now he's gone on to claim that Pete's images were a NASA hoax to bait him into claiming the presence of companions so that they could then debunk it. He's a full-on nutjob.

ISON may also break up into several smaller pieces as it gets closer to the Sun. Noting ISON hasn't been "melted" yet by the Sun as far as we know.
Quoting: Voltaic

Things in space don't melt, they sublimate, and the comet is sublimating, which is why it has a coma; it's just not all that much yet. Yes, it could break up into pieces, and then those pieces would continue along the comet's original orbit. What does that have to do with anything McCanney claimed?
Quoting: Dr. Astro

By the way, that nutjob McCanney is claiming that Pete's images were a hoax to bait him into claiming his images showed companions, and that this NASA video shows Pete's images without the companions as a way to debunk his claim: [link to www.youtube.com] He's lying and hoping people like me don't notice. The hot pixels are still there in the above NASA video.
Pete's time lapse is the first segment shown, as per McCanney's description, but the hot pixels are still there.
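A quick way to sanity-check the arithmetic in the post above is to rerun it. The following minimal Python sketch reproduces the figures under the post's own illustrative assumptions (a spherical 20 km nucleus, a generous 1,500 kg/m^3 density, a heliocentric distance of 5 AU, and a hypothetical companion at one lunar distance of 384,400 km); none of these are measured properties of ISON.

import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
GM_SUN = 1.327e20     # sun's gravitational parameter, m^3 s^-2
AU = 1.496e11         # astronomical unit, m
LUNAR_DIST = 3.844e8  # mean Earth-moon distance, m

# Assumed nucleus properties (illustrative upper bounds, not measurements)
radius = 10_000.0     # m, i.e. a 20 km diameter nucleus
density = 1500.0      # kg/m^3
volume = (4.0 / 3.0) * math.pi * radius**3
mass = density * volume                          # ~6.3e15 kg

# Kepler's third law: period of a companion orbiting at one lunar distance
d = LUNAR_DIST
period_s = 2.0 * math.pi * math.sqrt(d**3 / (G * mass))
period_years = period_s / (365.25 * 24 * 3600)   # ~2,300 years

# Comet's pull on the companion vs. the sun's differential (tidal) pull
# across that same distance, with the comet 5 AU from the sun
r = 5.0 * AU
a_comet = G * mass / d**2                        # ~2.8e-12 m/s^2
a_tidal = GM_SUN / r**2 - GM_SUN / (r + d)**2    # ~2.4e-7 m/s^2

print(f"nucleus mass      ~ {mass:.2e} kg")
print(f"orbital period    ~ {period_years:,.0f} years")
print(f"comet's pull      ~ {a_comet:.2e} m/s^2")
print(f"sun's tidal pull  ~ {a_tidal:.2e} m/s^2")
print(f"tidal/comet ratio ~ {a_tidal / a_comet:,.0f}")

With these inputs the companion's orbital period comes out near 2,300 years and the sun's differential pull exceeds the comet's own gravity by a factor of roughly 85,000, which is the point of the argument: anything orbiting that far from a nucleus this small would have been stripped away long ago.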
Google expands its US e-commerce and delivery service in the Midwest
Customers in parts of Wisconsin, Michigan, Illinois, Indiana, Ohio and Iowa can now access the service
Zach Miners (IDG News Service) on 08 September, 2015 22:53
Google is expanding its U.S. online shopping and delivery service by making overnight deliveries of retail items available in six states in the Midwest.

Google Express, which lets customers shop for items, including dry foods, from a range of retailers, is already available in more than a half-dozen cities, including San Francisco, Chicago, Boston and Washington, D.C. It includes same-day and overnight delivery of products.

But Google is branching out. The delivery service, which encompasses a variety of products, is available now in parts of Wisconsin, Michigan, Illinois, Indiana, Ohio and Iowa.

Partner retailers include Ace Hardware, Costco, Kohl's, Toys "R" Us, Treasure Island Foods and Walgreens, Google said in a blog post on Tuesday.

Google will also start testing a new fresh-food grocery delivery service in San Francisco and another U.S. city later this year, according to BloombergBusiness. Google did not immediately respond to a request for comment.
With the expansions, Google is broadening its ambitions in delivery services to better compete against rivals like Amazon and Instacart. AmazonFresh provides same-day and early morning delivery of groceries and other local products in a handful of U.S. cities. And on Tuesday, Amazon announced a restaurant delivery service in Seattle.

In addition to the Midwest, Google's Express service offers overnight delivery of products in parts of Northern California. Those products can be delivered the next day or several days after they're bought. Deliveries are made on Saturdays but not Sundays.