Archive for November, 2009

[#SmartGrid #スマートグリッド] PRIME Alliance: a standards body for smart-meter communications =>

November 30, 2009

The Prime Alliance is an organization created to establish an open, published communication standard between smart meters.

The specification adopts a scheme called OFDM (Orthogonal Frequency Division Multiplexing), also used in 802.11a/g and powerline networks, and is well regarded as a common interface for both wired and wireless links.
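As a rough illustration of the OFDM idea the spec adopts (this sketches the general technique, not the PRIME PHY itself): data symbols are placed on orthogonal subcarriers with an inverse FFT, and a cyclic prefix guards against multipath echoes. A minimal NumPy round trip:

```python
import numpy as np

def ofdm_modulate(symbols, cp_len=16):
    """Map a block of complex symbols onto orthogonal subcarriers via an
    inverse FFT, then prepend a cyclic prefix (a copy of the tail)."""
    time_domain = np.fft.ifft(symbols)
    return np.concatenate([time_domain[-cp_len:], time_domain])

def ofdm_demodulate(signal, n_subcarriers=64, cp_len=16):
    """Strip the cyclic prefix and recover the symbols with a forward FFT."""
    return np.fft.fft(signal[cp_len:cp_len + n_subcarriers])

# QPSK symbols on 64 subcarriers (sizes are illustrative, not PRIME's)
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(64, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

recovered = ofdm_demodulate(ofdm_modulate(symbols))
print(np.allclose(recovered, symbols))  # the ideal-channel round trip is lossless
```

Because the modulator is just an IFFT, the same digital front end works over radio or over a power line, which is the wired/wireless commonality the summary above points to.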




Posted via email from Ippei’s @CloudNewsCenter info database

[#SmartGrid #スマートグリッド] An area where the smart grid lags behind: a networking standard for home appliances, i.e. the interface to smart meters =>

November 30, 2009


From the appliance makers' standpoint, a low-cost solution is an absolute requirement. On the other hand, many of the proposed network protocols are high-speed networks capable of even distributing video within the home, so there is a dilemma: cost inevitably rises.



The smart-meter industry has also formed an industry group called the Prime Alliance and is discussing this issue.

Smart grid hits snag over powerline standard
Consumer OEMs lack a home net for smart appliances

Rick Merritt
  SAN JOSE, Calif. — The U.S. government is grappling with one of the first big snags in the wake of awarding $4 billion in grants to build a smart electric grid. It could take years before there are any low cost appliances for Joe Consumer to plug into an intelligent power network.

The vision of a smart grid includes smart appliances that automatically turn on or off in response to fluctuating energy prices as electric demand peaks and troughs. Intelligent fridges, dryers and other energy hogs could help utilities reduce their needs for new power plants, help consumers save money and ease stress on the environment.

To enable this so-called demand response application, appliance makers need an easy, low cost way to plug into the grid. Today they face as many as a dozen wired and wireless choices, most of them far too expensive and high bandwidth, focused on carrying digital music and video around the home.

In an effort to fill the gap, a senior government technologist laid down the gauntlet before a recent gathering of powerline networking engineers: create a standard to plug appliances into tomorrow’s smart grid soon–or Uncle Sam may do it for you.

Industry representatives at the meeting said the government is over-stepping its boundaries. Forcing the handful of powerline technologies in the market to converge makes no sense, they added.

Government planners knew a lack of standards was one of the big issues preventing the move to a digital, networked power grid. Before its first economic stimulus grant went out, it spent $10 million to launch a new smart grid standards effort organized by the National Institute of Standards and Technology.

NIST is now reviewing industry feedback on a first draft framework for smart grid standards. As a next step, NIST convened in Denver in mid-November a so-called Smart Grid Interoperability Panel of diverse stakeholders to drive the standards work forward.

SGIP includes fifteen Priority Action Plan committees focused on some of the thorniest standards issues ahead. At the first meeting of the PAP-15 group focused on powerline home networks, the national coordinator of smart grid standards at NIST, George Arnold, gave the group his ultimatum.

“It was a pretty competitive group to put in one room,” quipped Arnold. “I almost thought I would have to hire a security guard.”

In the meeting Arnold said NIST could use its experience selecting the AES security standard as a model for creating a technology bake-off for a low cost powerline standard for smart appliances.

“The moment where he most needed a security guard was when he said if you guys can’t get together [on a standard], NIST will decide,” said Stefano Galli, a lead scientist at Panasonic R&D Co. of America who attended the meeting. “There was an uproar.

“Big companies feel this weight of a government decision is not beneficial,” said Galli who also co-chairs an IEEE task force exploring communications standards for the smart grid. “A premature convergence of the home network could create more problems than it solves,” he added.

“I would rather let the market decide, and I think the 70 companies in the HomePlug Alliance would rather have the market decide, too,” said Rob Ranck, president of HomePlug which has released multiple generations of home powerline networking standards. “We’ve met with [regulators], the Department of Energy and Congressional staff members and there doesn’t seem to be a clear consensus that [picking a standard] is NIST’s role,” he said.

“If you go back to the legislation that supports NIST, it talks about interoperability, but it’s not clear whether it sets up the federal government to pick a single winner,” Ranck added.

The industry has tried and failed for years to set a single powerline home networking standard. After four years of work, the IEEE 1901 group is about to finalize a standard that essentially blesses multiple powerline physical layers and media access controllers.

The stakes are as high as the difficulties. The government is keen to ensure its $4 billion in recent smart grid grants is money well spent. Many of the projects include pilot projects in demand response systems.

White goods giant Whirlpool Corp. made a high-profile promise last fall that it will ship in 2011 a million dryers ready to plug into the smart grid—if there is a suitable networking standard the company can use.

“The last thing I want to see is a Wall Street Journal article a year from now saying Whirlpool had to renege on its promise because of the lack of a standard,” said Arnold.

From NIST’s perspective, “the ideal would be to have one wired and one wireless standard,” Arnold told EE Times.

Members of competing Wi-Fi and Zigbee trade groups also attended the meetings in Denver. Some said they disagreed with the idea of a single wireless standard, even if their own approach were the one picked.

A host of approaches are up for consideration as a wired standard for networking appliances. But all have their drawbacks.

“Everywhere you look there is something missing,” said Galli of Panasonic.

Ranck of HomePlug said his group’s technology has the broadest backing. It is part of the pending IEEE 1901 standard, which provides a mechanism for the competing powerline approaches to at least not interfere with each other.

“I think coexistence is achievable, and we will push for coexistence at a minimum,” Ranck said.

However, Galli said the 1901 spec makes no mention of smart grid applications, and it mandates a minimum 100 Mbit/second connection. “It’s too expensive to put 100 Mbits/s in an appliance; it’s overkill,” Galli said.

In an October meeting the 1901 group also dropped plans to support another major contender, the ITU standard for wired home networks. That standard specifies data over powerline and other wired media, supports a low-cost profile delivering a few Mbits/s, and has attracted broad support in the past year.

But the HomePlug camp is not prepared to back the ITU standard, which was adopted and then dropped as part of the IEEE 1901 spec. Ranck of HomePlug noted there are no products available yet and the spec may not be formally complete until June.

Engineers could define a generic network socket for appliances and build a variety of different modules based on the IEC 62480 standard for network adapters, Galli said. Each module could cost less than $20 and support a particular wired or wireless network, he added.

However, Arnold charged the PAP-15 group to define a standard that would not require consumers to buy different adapters if they move from one region to another.

Another approach would be to follow the example of Europe, which uses the low frequency (9-150 kHz) spectrum to handle the kind of low bit-rate traffic smart appliances need. Galli noted that the U.S. makes this band available for data up to 490 kHz, potentially enabling links supporting data rates up to 500 Kbits/s.

The Prime Alliance–which includes smart meter makers Itron and Landis+Gyr as well as chip makers STMicroelectronics and Texas Instruments–is working to define specs in this area. However, there are few PHY and MAC standards for such data links, Galli said.

The LonWorks technology of Echelon (San Jose) that delivers about 15 Kbits/s is another option. It is used widely in Italy to handle simple apps such as reading meters remotely; however, Ranck said it lacks wide support from other vendors.

Galli noted the Denver meeting was the group’s first, with many more to come. He was also quick to point out that the demand response application in the home is just one part of the overall smart grid effort.

“I feel there is undue attention on the home, and other important issues have not received the same level of attention,” Galli said.

Indeed, the Denver meeting included first gatherings for most of the other 14 high priority gap areas NIST has identified.

All materials on this site Copyright © 2009 TechInsights, a Division of United Business Media LLC. All rights reserved.


[#Cloud #クラウド] Authentication for cloud apps: Signify offers multi-factor authentication as a SaaS service =>

November 30, 2009


Secure Access to SaaS Applications

Static passwords replaced by two-factor authentication for applications in the cloud

25 November 2009: With the growing demand for web-based SaaS applications, UK-based Signify has extended its two-factor authentication (2FA) hosted services to provide secure access to cloud-based applications such as Google Apps. While 2FA is becoming the de facto standard for remote access to server-based business applications, most SaaS solutions, including Google Apps, still only provide authentication with static passwords that can be easily compromised.

In addition, the new Signify SaaS Login component of the service allows users to identify and authenticate themselves just once for access to all their network or cloud-based applications using a single set of two-factor authentication credentials. This increases the level of protection for corporate applications and data in the cloud by providing fast and convenient token or tokenless authentication.
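The press release doesn’t detail Signify’s token algorithms, but the standard way such time-based one-time codes are generated is RFC 6238 TOTP (an assumption for illustration, not a statement about Signify’s product). A minimal sketch using only the Python standard library; the secret below is the RFC’s published test key:

```python
import base64, hmac, struct, time

def totp(secret_b32, period=30, digits=6, now=None):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((now if now is not None else time.time()) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59 seconds
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, now=59))  # 287082
```

The server and the token share only the secret; because the code changes every 30 seconds, a captured password-plus-code pair is useless moments later, which is the property static passwords lack.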

“With the growing popularity of corporate SaaS applications such as Google Apps that present access to potentially sensitive data in the cloud, it is an anomaly that most enterprises still rely on just a user name and password for authentication,” said Dave Abraham, CEO at Signify. “SaaS Login is designed to fill in this security blind spot with strong two-factor authentication and comply with industry policies and guidelines that increasingly specify 2FA for remote access. Many organisations do not realise that this includes access to SaaS applications.”

Using the SAML (Security Assertion Markup Language) authentication protocol, the new SaaS Login component integrates Signify’s 2FA hosted services with SaaS applications. This enables users to log in using their existing two-factor token-based or tokenless credentials. Once logged in securely, Signify then allows easy ‘one click’ sign-on to each cloud or SaaS application, without requiring further authentication.

Signify’s flexible and reliable hosted service also makes it cost effective, fast and simple to deploy and manage 2FA across an organisation of any size. Without the need to buy new equipment, integrate with existing systems or handle the implementation of the SAML authentication protocol in-house, Signify ensures a low cost of deployment and total cost of ownership. And importantly, Signify hosted services are easy to manage, requiring no additional investment in specialist in-house skills.

About Signify
Headquartered in Cambridge, UK, Signify helps organisations to secure their computer networks. Signify provide a secure alternative to passwords that safely enables remote access to systems and information by delivering two-factor authentication as an on-demand hosted service.

Since 2000, Signify has built an outstanding reputation for delivering secure, reliable and flexible two-factor authentication which is quick and easy to deploy. Signify have an extensive client base across many sectors including major multi-nationals, small and medium sized businesses, professional services, central government and local authorities.

For more information visit or contact:
Allie Andrews
Tel: 01442 245030

Stuart Howden
Tel: 01223 472582


Latest News from Cloud Computing Journal / Wed, 25 Nov 2009 11:00:00 GMT



[#datacenter データセンタ] Gartner proposes a new data-center green metric to replace PUE/DCiE: PPE =>

November 30, 2009
The problem with PUE and DCiE is that while they are good at measuring what fraction of a data center's energy goes to IT equipment, they take no account of how much IT service that equipment actually delivers, that is, at what CPU utilization it is running. Two data centers with the same PUE could differ by a factor of ten or more in the computing service they provide.

PPE stands for Power to Performance Effectiveness. In addition to the energy-consumption ratio above, it factors in server utilization and consolidation density to produce a single number. The exact equation is not given in this article, but Gartner seems likely to make increasing use of this metric going forward.

Data Center Efficiency – Beyond PUE and DCiE

February 15th, 2009

Many IT organizations today are being asked to do more with less, reducing budgets or perhaps curtailing data center expansion projects altogether. Faced with the harsh realities of a difficult economic climate, data center managers will need to focus on creating the most efficient operating environments in order to extend the life of existing data centers. These efficiencies can be gained through many avenues: increasing compute densities, creating cold aisle containment systems, more effective use of outside air. But the key component over time will be having an easily understood metric to gauge just how efficient the data center is, and how much improvement in efficiency has been achieved on an ongoing basis.

What’s the issue? With the increased awareness of the environmental impact data centers can have, there has been a flurry of activity around the need for a data center efficiency metric. Many of those proposed, including PUE and DCiE, attempt to map a direct relationship between total facility power delivered and IT equipment power used. While these metrics provide a high-level benchmark for comparison between data centers, what they do not provide is any criterion to show incremental improvements in efficiency over time. Most importantly, they do not allow for monitoring the effective use of the power supplied, just the difference between power supplied and power consumed. There is no negative or positive effect of more efficiently utilizing the compute resources at hand, so a data center with a PUE of 2.0, running all its x86 servers at 5% utilization, could have the same PUE as another data center running the same number of servers at 50% utilization, with the second effectively producing 10 times the compute capacity of the first.
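The complaint is easy to see in numbers. A sketch, with made-up figures, of how PUE and DCiE are computed and why two very differently utilized data centers can score identically:

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_kw

def dcie(total_facility_kw, it_kw):
    """DCiE is simply the reciprocal of PUE, expressed as a percentage."""
    return 100.0 * it_kw / total_facility_kw

# Two hypothetical data centers drawing identical power: both score PUE 2.0
# (DCiE 50%), yet B runs its servers at 10x the utilization of A. The
# metrics are blind to the difference in useful work per watt.
for name, utilization in (("A", 0.05), ("B", 0.50)):
    print(name, pue(2000, 1000), f"{dcie(2000, 1000):.0f}%", utilization)
```

Both facilities print PUE 2.0 and DCiE 50%; nothing in either formula ever sees the utilization column.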

The PPE Metric: A more effective way to look at energy consumption is to analyze the effective use of power by existing IT equipment, relative to the performance of that equipment. While this may sound intuitively obvious (who wouldn’t want more efficient IT?), the numbers are worth spelling out. A typical x86 server will consume between 60% and 70% of its total power load when running at low utilization levels. Raising utilization levels has only a nominal impact on power consumed and yet a significant impact on effective performance per kilowatt. Pushing IT resources toward higher effective performance per kilowatt can have the twofold effect of improving energy consumption (putting energy to work) and extending the life of existing assets through increased throughput.

If major IT assets were evaluated in this manner it becomes clear that not only can more efficient environments be created, but that individual asset utilization levels can be increased, effectively improving the performance per square foot within the data center, and potentially deferring the construction of a new data center.
At Gartner we have created a metric to help demonstrate this effect, called the Power to Performance Effectiveness (PPE) metric. It was developed to help identify, at the device level, where efficiencies could be gained. Unlike other metrics, PPE does not compare actual performance to hypothetical maximums; rather, it is designed to allow users to define their own optimal maximum performance levels, and then compare average performance against that optimum.

There are three critical components that come into play, only one of which is outside the primary control of IT: rack density levels, server utilization levels, and energy consumption. Rack density levels are usually mandated by IT management and, as often as not, are defined based on power levels and the potential heat load specific rack densities might generate. In a typical data center today rack densities of 50%-60% are very common, yielding an average of 25 1U server slots per rack. Server utilization, especially in x86 environments, is often at the low end of the performance range, averaging between 7% and 15% in many organizations today. One of the key drivers for virtualization has been to improve these levels, driving servers up toward 60%-70% average utilization. Driving servers to higher utilization levels does not dramatically increase power consumption, and PPE is designed to capture that as well. Optimal power can therefore be defined not against total compute output but against realistic compute output, compared to energy used.
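Gartner does not publish the exact PPE equation in this post, so the formula below is purely an assumption for demonstration: it combines the three named components (utilization, rack density, energy efficiency) as simple ratios against user-defined optima, which matches the described intent but is not Gartner's actual metric.

```python
def ppe(avg_util, optimal_util, rack_density, optimal_rack_density,
        it_kw, total_kw):
    """Illustrative PPE-style score: each factor is a ratio of actual to the
    user-defined optimum; the last factor is energy efficiency (= DCiE).
    NOTE: this is an assumed formula, not Gartner's published equation."""
    return ((avg_util / optimal_util)
            * (rack_density / optimal_rack_density)
            * (it_kw / total_kw))

# Figures loosely based on the post: 55% rack density, ~10% utilization
# today vs. a virtualized target around 65% (all numbers illustrative).
today = ppe(0.10, 0.65, 0.55, 0.90, it_kw=1000, total_kw=2000)
virtualized = ppe(0.60, 0.65, 0.55, 0.90, it_kw=1000, total_kw=2000)
print(f"today={today:.3f}  after virtualization={virtualized:.3f}")
```

Unlike PUE, the score moves when utilization moves: virtualizing the same hardware raises it roughly sixfold here even though facility power is unchanged.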

Since PPE looks at not only energy required but asset utilization levels, the results point out rather quickly the percentage of growth available within an existing configuration. With a combination of higher virtualization levels and increased rack densities, it’s likely most rack environments will support existing growth rates for quite some time. And yes, we must assume that both power and cooling are available to support these higher densities. If not, an analysis of the cost to add additional power and cooling vs. the cost to build out a new data center might in fact change the overall decision-making process.

Bottom Line: PPE is not the end-all of power and performance monitoring, but it was designed to give IT managers a view of performance levels within their data centers, and a means to compare that performance to realistic potential (optimal) performance levels, rather than just a hypothetical maximum. Using PPE on an ongoing basis will yield a clear view of how power and performance use is changing over time, and how an organization’s overall data center efficiency is improving. A detailed review of PPE will be published on the Gartner research site shortly.


[#datacenter データセンタ] In North America, air-raid bunkers become data-center candidates: bunkers outside Omaha, Nebraska draw attention, with Yahoo and Google nearby =>

November 30, 2009
Because bunkers are built underground, they are almost unaffected by weather and insulated from swings in surface temperature, making them ideal for maintaining a stable temperature. According to one study, the effective electricity rate, supported by geothermal heating and cooling, is an unusually low 3 cents/kWh.


Recession slows plan to build bunker data centers

HASTINGS, Neb. — The precise spacing between the mounds of earth that decorate miles of central Nebraska farmland east of Hastings hint at their original purpose as Naval ammunition bunkers during World War II.

After the base closed in 1966, more than half of the 49,000 acres became a USDA lab, but most of the remaining concrete bunkers were sold and have served as solidly built hog-feeding or storage barns.

Now, a new company is hoping some of the same things that attracted the military to the area — abundant, cheap power and a secure, central location — will entice companies to invest in the bunkers to house their electronic data.

But even with Google and Yahoo opening up new data centers in the Omaha area — just a couple hours east of the bunkers — the recession has made it hard for Prairie Bunkers to get started.

“Everything about this is absolutely perfect, except the economy,” said Pam Brown, a former state senator who is CEO of the bunker company that launched last year.

A number of planned data center projects nationwide have been put on hold because of the economy, said John Boyd Jr., whose Boyd Co. advises companies about data center locations.

Tech heavyweights that sell the computing equipment used in data centers have said recently that businesses still appear reluctant to invest in technology. IBM, Intel, Cisco and Hewlett-Packard have all suggested business spending on technology might not recover until sometime in 2010.

Both Boyd and Brown said they expect demand for data centers to grow because President Barack Obama’s proposed financial services reforms will likely add new record-keeping requirements.

The push for more electronic health care records also will lead to demand for more data centers because more server space will be needed as paper records are converted.

“Companies cannot ignore the need to keep data secure,” Boyd said.

Nebraska offers relatively low operating, land and power costs, so Boyd said it makes sense as a location for data centers.

“Central Nebraska clearly fits the profile of where data centers are being built,” Boyd said.

It helps that two well-known Internet companies have already committed to building data centers on the plains about 140 miles east of the bunkers. Yahoo said a year ago that it plans to build a data center in the Omaha suburb of La Vista. And Google opened a $600 million data center earlier this year in Council Bluffs, Iowa — just over the Missouri River from Omaha.

Plus Nebraska is geographically neutral in relation to the coasts and it’s got a major fiber optic pipeline that crosses the state and comes within about 200 yards of the bunkers.

Prairie Bunkers officials hope the strengths of their bunkers, which were built to withstand a direct hit from a 500 pound bomb, will overcome the challenges the economy has created.

The bunkers are sturdy enough to handle the main weather concern of the Plains, tornadoes, without problems, Brown said. And the company plans to equip all the bunkers with backup generators and make sure the entire site has redundant power and fiber optic lines to limit outages.

Company officials estimate power at the bunker site will cost about 3 cents per kilowatt hour with a geothermal heating and cooling system. Solar and wind power also might be options, but that will be left up to the company building each data center. A couple of existing data center companies already offer secure server space in bunkers or caves, but Brown said none of those offers a build-to-suit arrangement like Prairie Bunkers’.
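A quick back-of-the-envelope on what the quoted rate implies. The 1 MW facility load and the 10-cent comparison rate are assumptions for illustration, not figures from the article:

```python
# Annual energy bill at the quoted bunker rate vs. an assumed typical rate.
rate_per_kwh = 0.03            # ~3 cents/kWh quoted for the bunker site
typical_rate = 0.10            # assumed comparison rate (illustrative)
load_kw = 1000                 # hypothetical 1 MW continuous facility load
kwh_per_year = load_kw * 24 * 365   # 8,760,000 kWh

print(f"bunker site: ${kwh_per_year * rate_per_kwh:,.0f}/year")   # $262,800
print(f"at 10 cents: ${kwh_per_year * typical_rate:,.0f}/year")   # $876,000
```

At these assumptions the cheap power alone is worth over $600,000 a year per megawatt, which is why the rate is a headline selling point.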

“Our niche is a little different, and there is not really anything like us in the world,” Brown said.

The company appears close to signing its first bunker-development deal, but Brown declined to discuss the details because the negotiations are confidential.

Terry Franks with Data Power Technology came away impressed after his first visit to the formidable 50-foot-by-100-foot bunkers this fall.

The bunkers’ walls are a foot thick, and the concrete ceiling is 18 inches thick with 2-3 feet of dirt and grass atop it. Down the middle of each bunker, two rows of 2-foot-square concrete columns support the weight above.

Any sound bounces off the concrete walls inside the bunkers and reverberates again and again.

“My first impression is wow. To build something of this magnitude today would take a lot of dollars,” said Franks, who hopes to supply equipment to the firm.

Prairie Bunkers has the rights to 184 bunkers that stretch over five miles. The bunkers the company is marketing now are at the easternmost edge of the old ammunition depot, farthest away from where the munitions were produced when the land was part of the nation’s largest inland naval ammunition depot.

The company believes the bunkers will make perfect vaults to protect data centers.

“Data is more valuable than gold, and we intend to make sure that we treat it that way,” Brown said.


Copyright © 2009 The Associated Press. All rights reserved.


[#datacenter データセンタ] A green data center in Finland: a plan to build it beneath a cathedral and divert the waste heat to district heating =>

November 30, 2009
The plan is to install it in the bedrock beneath Uspenski Cathedral in Helsinki, a church well known even to tourists, with operations slated to begin as early as January next year.


Europe has many old structures like this, and there is a good chance they too can prove valuable as data-center sites. It's an interesting idea, one only a historic city could produce.

Cloud computing goes green underground in Finland

Sun Nov 29, 2009 7:15pm EST

By Tarmo Virki

HELSINKI (Reuters) – In the chill of a massive cave beneath an orthodox Christian cathedral, a city power firm is preparing what it thinks will be the greenest data center on the planet.

Excess heat from hundreds of computer servers to be located in the bedrock beneath Uspenski Cathedral, one of Helsinki’s most popular tourist sites, will be captured and channeled into the district heating network, a system of water-heated pipes used to warm homes in the Finnish capital.

“It is perfectly feasible that a quite considerable proportion of the heating in the capital city could be produced from thermal energy generated by computer halls,” said Juha Sipila, project manager at Helsingin Energia.

Finland and other north European countries are using their water-powered networks as a conduit for renewable energy sources: capturing waste to heat the water that is pumped through the system.

Due online in January, the new data center for local information technology services firm Academica is one way of addressing environmental concerns around the rise of the internet as a central repository for the world’s data and processing — known as “cloud computing.”

Companies seeking large-scale, long-term cuts in information technology spending are concentrating on data centers, which account for up to 30 percent of many corporations’ energy bills.

Data centers such as those run by Google already use around 1 percent of the world’s energy, and their demand for power is rising fast with the trend to outsource computing.

One major problem is that in a typical data center only 40-45 percent of energy use is for the actual computing — the rest is used mostly for cooling down the servers.

“It is a pressing issue for IT vendors since the rise in energy costs to power and cool servers is estimated to be outpacing the demand for servers,” said Steven Nathasingh, chief executive of research firm Vaxa Inc.

“But IT companies cannot solve the challenge by themselves and must create new partnerships with experts in energy management like the utility companies and others,” he said.

Data centers’ emissions of carbon dioxide have been running at around one third of those of airlines, but are growing 10 percent a year and now approach levels of entire countries such as Argentina or the Netherlands.


Besides providing heat to homes in the Finnish capital, the new Uspenski computer hall will use half the energy of a typical datacenter, said Sipila.

Its input into the district heating network will be comparable to one large wind turbine, or enough to heat 500 large private houses.

“Green is a great sales point, but equally important are cost savings,” said Pietari Paivanen, sales head at Academica: the center, when expanded as planned, will trim 375,000 euros ($561,000) a year from the company’s annual power bill. Academica’s revenue in 2008 was 15 million euros.

“It’s a win-win thing. We are offering the client cheap cooling as we can use the excess heat,” Sipila said.

The center’s location in the bowels of the cathedral has an added bonus: security. It is taking over a former bomb shelter carved into the rock by the fire brigade in World War Two as a refuge for city officials from Russian air raids.

(Editing by Sara Ledwith)

© Thomson Reuters 2009. All rights reserved.


[#datacenter データセンタ] Saving power alone isn't green: the importance of the LEED standard, which also weighs power, recycling, water, waste and other environmental criteria =>

November 30, 2009
The article covers Digital Realty Trust's back-to-back openings of LEED Platinum-certified (the highest level) data centers in Santa Clara. The company has long focused on meeting LEED, an environmental standard for buildings, and its particular strength is a construction method it has devised for building such facilities in a short time.


Winning Data Center customers, being an entrepreneur – Digital Realty Trust leading LEED certification

I was having a discussion with a data center manager and made the point that what is interesting is whether people are being data center managers or data center entrepreneurs.  Good leaders know how to do both and when it is appropriate.

A well-written paper on entrepreneurship is by Saras D. Sarasvathy.

So, what makes entrepreneurs entrepreneurial?

Entrepreneurs are entrepreneurial, as differentiated from managerial or strategic, because they think effectually; they believe in a yet-to-be-made future that can substantially be shaped by human action; and they realize that to the extent that this human action can control the future, they need not expend energies trying to predict it. In fact, to the extent that the future is shaped by human action, it is not much use trying to predict it – it is much more useful to understand and work with the people who are engaged in the decisions and actions that bring it into existence.

An example of being entrepreneurial is Digital Realty Trust’s effort to earn the highest LEED ratings for its portfolio of data centers. I’ve sat in many discussions where the cost-benefit analysis is done for LEED points. I would imagine the conversations at Digital Realty Trust are different when they say, “We will be Platinum. Figure out how to make it cost effective.”

DataCenterKnowledge and others are helping to market LEED buildings.

In Santa Clara, ‘Green’ Speeds Toward Platinum

November 24th, 2009 : Rich Miller


1201 Comstock is one of the Digital Realty data centers in Santa Clara, Calif. that has received a LEED Platinum certification.

SANTA CLARA, Calif. – An unassuming industrial park near San Jose Airport is hardly where you’d expect to find some of the greenest acres in the data center industry. The Santa Clara, Calif., campus operated by Digital Realty Trust is home to three data centers with a Platinum or Gold rating under the LEED standard for energy-efficient buildings.

What is the value of the Platinum and Gold ratings? Well Digital Realty has some big brand companies in the space.

Digital Realty, the largest data center operator in the U.S., has used its Santa Clara operation to refine an energy efficient design using fresh-air cooling, which has made the site a magnet for some of the fastest-growing companies in the digital economy.

Facebook, Twitter and Yahoo all have their servers housed here. NVIDIA, which is seeking to harness its GPU technology to power cloud and high performance computing, has leased an entire data center here as well.

And that makes Gold LEED a me-too statement; the new news is about Platinum.

Where LEED certification was once seen as a difficult hurdle for mission-critical sites, companies like Digital Realty are demonstrating the ability to build data centers to the very highest levels of the specification, and do so with remarkable speed. The two LEED Platinum facilities at the Santa Clara campus were completed in less than eight months, far less than the 18 to 24 months typically required for an enterprise data center project.

As DataCenterDynamics reports:

“Achieving LEED platinum certification means that attention has been paid to every aspect of the building’s design and construction, including the operating energy efficiency of the finished datacenter as well as often overlooked, key issues such as building materials, materials re-use and construction practices,” DRT CTO Jim Smith said in a statement. DRT announced the achievement on Monday.

Digital Realty Trust is able to market its capabilities above the rest and be first to market. That’s being an entrepreneur.

I wonder how much interest there is now in Digital Realty’s POD data centers.

Digital Realty employs a Turn-Key Datacenter design offering customer data center pods available in units of 1.125 megawatts.

Green Data Center Blog / Wed, 25 Nov 2009 17:54:16 GMT


Posted via email from Ippei’s @CloudNewsCenter info database

[#Cloud Cloud News] Hadoop is not the only next-generation database; many others have emerged. A comparison of two leading ones, HBase and Cassandra =>

November 30, 2009




HBase vs. Cassandra: NoSQL Battle!

Distributed, scalable databases are desperately needed these days. From building massive data warehouses at a social media startup to protein folding analysis at a biotech company, “Big Data” is becoming more important every day. While Hadoop has emerged as the de facto standard for handling big data problems, there are still quite a few distributed databases out there, and each has its unique strengths.

Two databases have garnered the most attention: HBase and Cassandra. The split between these equally ambitious projects can be categorized into Features (things missing that could be added at any time) and Architecture (fundamental differences that can’t be coded away). HBase is a near-clone of Google’s BigTable, whereas Cassandra purports to be a “BigTable/Dynamo hybrid”.

In my opinion, while Cassandra’s “writes-never-fail” emphasis has its advantages, HBase is the more robust database for a majority of use-cases. Cassandra relies mostly on Key-Value pairs for storage, with a table-like structure added to make more robust data structures possible. And it’s a fact that far more people are using HBase than Cassandra at this moment, despite both being similarly recent.

Let’s explore the differences between the two in more detail…

CAP and You

This article at Streamy explains CAP theorem (Consistency, Availability, Partitioning) and how the BigTable-derived HBase and the Dynamo-derived Cassandra differ.

Before we go any further, let’s break it down as simply as possible:

  • Consistency: “Is the data I’m looking at now the same if I look at it somewhere else?”
  • Availability: “What happens if my database goes down?”
  • Partitioning: “What if my data is on different networks?”

CAP posits that a distributed system must compromise on at least one of these properties. HBase values strong consistency and availability, while Cassandra values availability and partition tolerance. Replication is one way of dealing with some of the design tradeoffs. HBase does not have replication yet, but that’s about to change, and Cassandra’s replication comes with some caveats and penalties.
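These tradeoffs are often made concrete with quorum replication: if a write must be acknowledged by W of N replicas and a read must consult R of them, the two always overlap when R + W > N. A minimal sketch of that rule (function and parameter names are my own, not from either project):

```python
# Quorum-consistency check: with N replicas, a read of R replicas and a
# write acknowledged by W replicas share at least one replica whenever
# R + W > N, so every read sees the latest acknowledged write.
def is_strongly_consistent(n: int, r: int, w: int) -> bool:
    """Return True if any read quorum intersects any write quorum."""
    return r + w > n

# Strong-consistency tuning: read and write quorums always overlap.
assert is_strongly_consistent(n=3, r=2, w=2)

# Availability-first tuning (R=1, W=1): fast and partition-tolerant,
# but a read may miss the latest write (eventual consistency).
assert not is_strongly_consistent(n=3, r=1, w=1)
```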

Let’s go over some comparisons between these two datastores:

Feature Comparisons


HBase is part of the Hadoop ecosystem, so many useful distributed processing frameworks support it: Pig, Cascading, Hive, etc. This makes it easy to do complex data analytics without resorting to hand-coding. Efficiently running MapReduce on Cassandra, on the other hand, is difficult because all of its keys live in one big “space”, so the MapReduce framework doesn’t know how to split and divide the data natively. There needs to be some hackery in place to handle all of that.

In fact, here’s some code from a Cassandra/Hadoop Integration patch:

+ /*
+  FIXME This is basically a huge kludge because we needed access to
+ cassandra internals, and needed access to hadoop internals and so we
+ have to boot cassandra when we run hadoop. This is all pretty
+ fucking awful.
+  P.S. it does not boot the thrift interface.
+ */

This gives me The Fear.

Bottom line? Cassandra may be useful for storage, but not for data processing. HBase is much handier for that.
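The split-point issue can be made concrete with a toy example: HBase’s sorted regions hand MapReduce natural input ranges for free, while a single flat key space forces you to compute split points yourself. A hedged sketch (my own illustration, not either project’s API):

```python
# HBase stores rows sorted and partitioned into regions, so a MapReduce
# job can assign one region per mapper. With one flat key space you must
# invent split points yourself. This toy splitter chunks a sorted key
# list into contiguous ranges, mimicking what regions provide for free.
def split_keyspace(sorted_keys, num_splits):
    """Divide a sorted key list into contiguous (start, end) ranges."""
    size = max(1, len(sorted_keys) // num_splits)
    return [
        (sorted_keys[i], sorted_keys[min(i + size, len(sorted_keys)) - 1])
        for i in range(0, len(sorted_keys), size)
    ]

keys = ["apple", "banana", "cherry", "date", "fig", "grape"]
# one contiguous range per hypothetical mapper
assert split_keyspace(keys, 3) == [
    ("apple", "banana"), ("cherry", "date"), ("fig", "grape"),
]
```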

Installation & Ease of Use

Cassandra is only a Ruby gem install away. That’s pretty impressive. You still have to do quite a bit of manual configuration, however. HBase is a .tar (or packaged by Cloudera) that you need to install and setup on your own. HBase has thorough documentation, though, making the process a little more straightforward than it could’ve been.

HBase ships with a very nice Ruby shell that makes it easy to create and modify databases, set and retrieve data, and so on. We use it constantly to test our code. Cassandra does not have a shell at all — just a basic API. HBase also has a nice web-based UI that you can use to view cluster status, determine which nodes store various data, and do some other basic operations. Cassandra lacks this web UI as well as a shell, making it harder to operate. (ed: Apparently, there is now a shell and pretty basic UI — I just couldn’t find ’em).

Overall Cassandra wins on installation, but lags on usability.


Architecture

The fundamental divergence of ideas and architecture behind Cassandra and HBase drives much of the controversy over which is better.

Off the bat, Cassandra claims that “writes never fail”, whereas in HBase, if a region server is down, writes will be blocked for affected data until the data is redistributed. This rarely happens in practice, of course, but will happen in a large enough cluster. In addition, HBase has a single point-of-failure (the Hadoop NameNode), but that will be less of an issue as Hadoop evolves. HBase does have row locking, however, which Cassandra does not.

Apps usually rely on data being accurate and unchanged from the time of access, so the idea of eventual consistency can be a problem. Cassandra has an internal method of resolving up-to-dateness issues, described as vector clocks, a complex but workable solution where basically the latest timestamp wins. The HBase/BigTable model instead puts the onus of resolving any consistency conflicts on the application, as everything is stored versioned by timestamp.
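The two resolution styles can be sketched side by side: a Dynamo-style store keeps only the value carrying the latest timestamp, while a BigTable-style store retains every timestamped version and defers the choice to the application. A toy illustration (the data shapes here are hypothetical):

```python
# Each replica reports a (timestamp, value) pair. Last-write-wins
# resolution keeps only the newest value; BigTable-style storage keeps
# all versions, newest first, and lets application code pick.
def last_write_wins(versions):
    """Dynamo-style: return the value with the highest timestamp."""
    return max(versions, key=lambda tv: tv[0])[1]

def all_versions(versions):
    """BigTable-style: return every value, newest timestamp first."""
    return [value for _, value in sorted(versions, reverse=True)]

replicas = [(100, "blue"), (250, "green"), (175, "red")]
assert last_write_wins(replicas) == "green"               # newest wins
assert all_versions(replicas) == ["green", "red", "blue"]  # app decides
```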

Another architectural quibble is that Cassandra only supports one table per install. That means you can’t denormalize and duplicate your data to make it more usable in analytical scenarios. (edit: this was corrected in the latest release) Cassandra is really more of a Key Value store than a Data Warehouse. Furthermore, schema changes require a cluster restart(!). Here’s what the Cassandra JIRA says to do for a schema change:

  1. Kill Cassandra
  2. Start it again and wait for log replay to finish
  3. Kill Cassandra AGAIN
  4. Make your edits (now there is no data in the commitlog)
  5. Manually remove the sstable files (-Data.db, -Index.db, and -Filter.db) for the CFs you removed, and rename files for CFs you renamed
  6. Start Cassandra and your edits should take effect

With the lack of timestamp versioning, eventual consistency, no regions (making things like MapReduce difficult), and only one table per install, it’s difficult to claim that Cassandra implements the BigTable model.


Replication

Cassandra is optimized for small datacenters (hundreds of nodes) connected by very fast fiber. It’s part of Dynamo’s legacy from Amazon. HBase, being based on research originally published by Google, is happy to handle replication to thousands of planet-strewn nodes across the ‘slow’, unpredictable Internet.

A major difference between the two projects is their approach to replication and multiple datacenters. Cassandra uses a P2P sharing model, whereas HBase (in the upcoming version) employs more of a data-plus-logs backup method, aka ‘log shipping’. Each has a certain elegance. Rather than explain this in words, here are the diagrams:

This first diagram is a model of the Cassandra replication scheme.

  1. The value is written to the “Coordinator” node
  2. A duplicate value is written to another node in the same cluster
  3. A third and fourth value are written from the Coordinator to another cluster across the high-speed fiber
  4. A fifth and sixth value are written from the Coordinator to a third cluster across the fiber
  5. Any conflicts are resolved in the cluster by examining timestamps and determining the “best” value.
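The write placement in the steps above can be sketched as a simple fan-out from the coordinator (cluster names and replica counts here are illustrative, not Cassandra’s actual placement strategy):

```python
# Fan one write out from the coordinator: one duplicate in the local
# cluster, then two replicas in each remote cluster, per steps 1-4 above.
def plan_replicas(coordinator, local_peer, remote_clusters):
    """Return the ordered list of (cluster, node) replica targets."""
    placement = [("local", coordinator), ("local", local_peer)]
    for cluster, nodes in remote_clusters.items():
        placement.extend((cluster, node) for node in nodes[:2])
    return placement

remote = {"dc-east": ["e1", "e2"], "dc-west": ["w1", "w2"]}
plan = plan_replicas("c0", "c1", remote)
assert len(plan) == 6              # six copies in total, matching steps 1-4
assert plan[0] == ("local", "c0")  # the write lands on the coordinator first
```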

The major problem with this scheme is that there is no real-world auditability. The nodes are eventually consistent — if a datacenter (“DC”) fails, it’s impossible to tell when the required number of replicas will be up-to-date. This can be extremely painful in a live situation — when one of your DCs goes down, you often want to know *exactly* when to expect data consistency so that recovery operations can go ahead smoothly.

It’s important to note that Cassandra relies on high-speed fiber between datacenters. If your writes are taking 1 or 2 ms, that’s fine. But when a DC goes out and you have to revert to a secondary one in China instead of 20 miles away, the incredible latency will lead to write timeouts and highly inconsistent data.

Let’s take a look at the HBase replication model (note: this is coming in the .21 release):

What’s going on here:

  1. The data is written to the HBase write-ahead log in RAM, then flushed to disk
  2. The file on disk is automatically replicated due to the Hadoop Filesystem’s nature
  3. The data enters a “Replication Log”, where it is piped to another Data Center.

With HBase/Hadoop’s deliberate sequence of events, consistency within a datacenter is high. There is usually only one version of a piece of data for a given time period; if there is more than one, HBase’s timestamps allow your code to figure out which version is the “correct” one, rather than having the cluster choose for you. Due to the nature of the Replication Log, one can always tell the state of data consistency at any time, a valuable tool to have when another data center goes down. In addition, this structure makes it easy to recover from the high-latency scenarios that can occur with inter-continental data transfer.
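A minimal sketch of that write path, loosely modeled on the steps above (class and method names are my own): log first, then apply, then ship the log tail to the remote site, so replication lag is always an exact, known quantity.

```python
# Log-shipping write path: every edit hits the write-ahead log before
# the store, and the same ordered log entries are later piped to a
# remote datacenter, so the remote side's lag is always known exactly.
class LogShippingStore:
    def __init__(self):
        self.wal = []      # write-ahead log: ordered (key, value) edits
        self.store = {}    # applied key/value data
        self.shipped = 0   # index of the last log entry sent downstream

    def write(self, key, value):
        self.wal.append((key, value))  # 1. append to the log first
        self.store[key] = value        # 2. then apply (flush)

    def ship(self, remote):
        # 3. pipe any unshipped log entries to the remote datacenter
        for key, value in self.wal[self.shipped:]:
            remote.store[key] = value
        self.shipped = len(self.wal)

primary, backup = LogShippingStore(), LogShippingStore()
primary.write("row1", "a")
primary.write("row2", "b")
primary.ship(backup)
assert backup.store == {"row1": "a", "row2": "b"}
assert primary.shipped == 2  # exactly how far replication has progressed
```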

Knowing Which To Choose

The business context of Amazon and Google explains the emphasis on different functionality between Cassandra and HBase.

Cassandra expects High Speed Network Links between data centers. This is an artifact of Amazon’s Dynamo: Amazon datacenters were historically located very close to each other (dozens of miles apart) with very fast fiber optic cables between them. Google, however, had transcontinental datacenters which were connected only by the standard Internet, which means they needed a more reliable replication mechanism than the P2P eventual consistency.

If you need highly available writes with only eventual consistency, then Cassandra is a viable candidate for now. However, many apps are not happy with eventual consistency, and Cassandra is still lacking many features. Furthermore, even if writes do not fail, there is still cluster downtime associated with even minor schema changes. HBase is more focused on reads, but can handle very high read and write throughput. It’s much more Data Warehouse ready, in addition to serving millions of requests per second. The HBase integration with MapReduce makes it valuable and versatile.

Posted via email from Ippei’s @CloudNewsCenter info database

[#SmartGrid Smart Grid] Smart grid privacy issues, the California case: privacy advocacy groups scrutinize the smart grid =>

November 29, 2009
While the PUC (Public Utilities Commission) is promoting the smart grid within the state (13,000 smart meters are being installed per day), groups such as the CFC (Consumer Federation of California) and TURN (The Utilities Reform Network) are examining the current privacy issues and the problems that should concern us going forward.


  • Travel agencies: read a household’s electricity usage, spot holiday patterns, and target travel advertising.
  • Police: use the data to corroborate what was going on inside a home.
  • Lawyers: investigate living patterns inside the home.
  • Insurance companies: likewise use the data on household living patterns.
  • Criminals: predict when residents are away from home.
  • Landlords: investigate tenants’ living habits.

Privacy Challenges and Implications of an Electric “Smart Grid” System

Posted on 29 November 2009

By Zack Kaldveer
Communications Director, Consumer Federation of California
and author of the blog Privacy Revolt

A critically important debate has emerged regarding the privacy implications and challenges that a transition to a smart grid system for electricity poses and how such concerns can be addressed.

In California, as in states across the country, the Public Utilities Commission (PUC) is currently considering how to implement a smart grid electrical system. In response to this rulemaking, and the lack of attention being paid to consumer privacy to date, the Consumer Federation of California (CFC) recently joined The Utilities Reform Network (TURN) in urging the Commission to allow for a more comprehensive review and debate regarding such concerns.

In response, the PUC has agreed to hold separate privacy specific hearings – with accompanying workshops and public comments – at a date to be determined in mid December. While this is a temporary victory for privacy and consumer advocates, enormous challenges remain.

What is a “Smart Grid” and why is it needed?

The ‘smart grid’ is a system that will track each kWh of electricity from the generator to an individual’s home through a series of automated devices. The smart grid will come into our homes through a ‘smart meter’ and a ‘home area network’ that monitor the kWh we use.

This deployment of ubiquitous monitoring technologies will allow utilities to collect, and possibly distribute, detailed information about household electricity consumption habits (e.g. ice makers will operate only when the washing machine isn’t, TVs will shut off when viewers leave the room, air conditioner and heater levels will be adjusted based on time of day and climate, etc.) in hopes of reducing and/or better managing electricity usage.

Home gadgets and appliances will be wirelessly connected to the Internet so consumers can access detailed information about their electricity use, and reduce their carbon footprint appropriately.

The “Smart Grid” has been trumpeted by former Vice President Al Gore for years, and our nation’s transition to such a system has accelerated since President Obama announced his plan to repair our country’s crumbling infrastructure, which included billions of dollars to construct a nationwide “smart grid”.

The potential benefits of a system that allows for such monitoring of electricity flow and control over it are self evident, including: Reducing energy use and CO2 emissions (maybe 20% per home), preventing blackouts, spurring development of renewable energy sources, and improving customer service by locating trouble spots and dispatching maintenance teams to fix the problem (among others).

According to President Obama (and other environmental experts), a smart grid system “will save us money, protect our power sources from blackout or attack, and deliver clean, alternative forms of energy to every corner of our nation.”

A variety of interests – in addition to consumer and environmental – also have tangible reasons to support such a transition:

Utilities could sell, if permitted, the massive amounts of household data they will be capable of gathering; Law enforcement could be able to more easily identify, track, and manage information associated with people, places, or things involved in investigations; and marketers could access consumer data that will enable them to more effectively target their products.

(Note: smart meters have recently received some bad press due to a number of customers in one California town discovering that their energy bills have skyrocketed. A lawsuit has been filed against PG&E.)

Rest assured this transition is already underway: up to three-fourths of the homes in the United States are expected to be placed on the “Smart Grid” in the next decade. Some 8 million “Smart Meters” have already been installed in U.S. homes, and an estimated 50 million will be in place by 2012. PG&E is installing 13,000 per day in California, and overall, the state’s three major private utilities will deploy 12 million by the end of 2012.

Privacy Implications of a Smart Grid System

The paradox of a smart grid system is that what will ostensibly make it an effective tool in reducing energy usage and improving our electric grid – information – is precisely what makes it a threat to privacy: Information (ours). It is this paradox that has led some to suggest that privacy might even be the “Achilles’ heel” of the “Smart Grid”.

What are the unintended consequences of such a system? Personal privacy issues routinely arise when data that is harmless in isolation becomes a threat when combined with other data or examined by a third party for patterns. A few principles we should keep in mind as we develop a regulatory framework for this transition are notice, protection, and choice.

In particular: How much information should we give up to the grid? Should it be up to the customer to decide? If not, who gets access to that information, for what reason, and what will they be allowed to do with it? How will this information be managed (i.e. how long stored?)? And how well will it be protected from those that might seek it unlawfully? Can it even be fully protected given the increasing success and technical expertise of hackers?

Because technological innovation will only accelerate, we would do well to consider more than simply the immediate privacy threats posed by current technologies, but also what we know to be just around the corner.

For instance, while the tracking of mere energy usage in one’s home may be of less concern, as home devices become increasingly “smarter”, one can easily envision a technology convergence in which a myriad of gadgets could be used to track more sensitive information. Security technology already exists to monitor presence in homes to detect break-ins.

What else will smart appliances “tell” others about what we do, and when we do it, in our homes?

Such concerns are already being debated by academics and privacy advocates. In addition to taking into account existing privacy protection laws, companies that develop smart grid technology would be wise to anticipate consumer reaction to any system that invades the most precious private space we occupy: our homes.

Utility companies could reconstruct much of our daily lives: when we wake up, when we come home, when we go on vacation, and when we hit the hot tub to relax. Now consider how much that information will be worth to third-party marketing companies.

Specific examples of “unintended consequences” that may arise if proper attention is not paid to privacy include:

•    Travel agencies might start sending you brochures right when your annual family vacation approaches.

•    Law enforcement officials might use our information against us. Where were you last night? Home listening to music, huh? That’s not what PG&E told us. Or what about the predictable desire of police to locate in-home marijuana growers by monitoring household power usage?

•    Lawyers might seek to subpoena your data in a divorce trial, “You say you’re a good parent, so why is the television on so late on school nights? Were you with someone in the hot tub at 2 AM on Saturday when the kids were gone?”

•    Insurance companies, always seeking to maximize profits by denying coverage or raising premiums, might start developing connections between energy use patterns and unhealthy tendencies.

•    Hackers and criminals might seek to falsify power usage, pass on their charges to a neighbor, disconnect someone else from the grid, and plan burglaries with an unprecedented degree of accuracy.

•    Some consumers are already getting statements that compare their use to their neighbors. Could we see a system develop in which some are penalized unfairly for “wasteful” usage? Will details such as the number of occupants and their occupations (i.e. someone who telecommutes and is on computer all day) be properly taken into account?

•    Landlords might be interested in knowing what’s happening inside their properties.

•    If recent revelations regarding warrantless wiretapping, Patriot Act abuses and increasingly intrusive surveillance techniques are an indicator, we should also expect government agencies to vigorously pursue this data.

•    It’s not hard to envision RFID tagged labels – read by smart meters – on the food and prescription drugs that fill our refrigerators and cabinets. Could that information be sold to marketers too?  Could our health insurance go up because we eat too much unhealthy food? Might we start receiving brochures about prescription drugs that have been targeted for us?

The privacy implications of such a grid strike at the very heart of the Fourth Amendment and a core American value: our right to keep private what goes on in our homes.

Policy Challenges and Solutions

Ideally, the CPUC would adopt the European approach, which binds companies to collect as little information as is necessary to complete a transaction, and they must then delete that data as soon as it is no longer needed – known as “Data Minimization”. But in America, where information itself is a big money industry – and government tends to be pro-business – such an approach is unlikely.

A superior indicator, and a useful case study, can be found in Colorado. The state public utility commission there was convinced by Elias Quinn, from the Center for Environmental and Energy Security (CEES) at the University of Colorado Law School, and author of “Privacy and the New Energy Infrastructure“, to hold separate hearings dealing with privacy concerns related to a smart grid system. Mr. Quinn enumerated four general categories of questions about personal data and its usage, along with policy proposals for the Commission to consider adopting that would more adequately protect consumer privacy.

1. Who has access to your data? As one might expect, consumer consent requirements may vary depending on who is seeking your information. Those seeking access to this data were broken up into three categories – with different approaches taken for each (this does not necessarily represent a full endorsement of each of these approaches):

    A. Electric Utilities: The consumer must opt-out if they choose to prevent electric utilities from accessing their data because this information is critical to the deployment of smart grid networks and to operating the next generation distribution systems. Thus when people sign up for service, they can decline to participate in sharing any data that isn’t necessary to run the system itself.

    B. Automation vendors, smart appliance manufacturers, or other related-but-not-essential companies: A one time Opt-In per manufacturer.

    C. Entities wholly unrelated to electricity provision: Access is only available if the consumer Opts-In on a case-by-case basis. Perhaps such third party entities should also need to demonstrate a good reason to be able to even ask us for that information before bombarding us with requests. So if an insurance carrier seeks to examine a customer’s usage data, the customer will have to be contacted for his/her informed consent first.

I would add an additional category of “data seekers” that deserves special consideration:

    D. Law Enforcement: Law enforcement should be prohibited, by law, from access to our data unless they have a warrant signed by a judge based on already existing reasonable suspicion.

2. How is your data managed? The European Union’s Data Directive has been cited as a good model and consists of the following core principles: [1] data processed fairly and lawfully, [2] sought or collected for specified purposes, and analyzed only for those purposes, [3] merely adequate and not excessive for the purposes motivating its collection, [4] kept accurate, and [5] kept in a form allowing for identification for no longer than necessary.

Electricity customers should also have the right to access or audit their information for accuracy – ideally in real time.

3. How is your data protected? Utilities should be mandated by law, with strong penalties, to protect information against anyone who would seek to monitor, steal, or manipulate it. The challenge is to secure both (1) the database itself and (2) the data in transit, which could be trickier as it is wireless.

4. What happens if your data is breached? Consumers should be notified immediately in the event that personal information has been obtained by a party without the requisite consent.

Privacy vs. Environment? Or Data Owners vs. Data Profiteers?

How best to implement a Smart Grid system is an issue (“Pay-As-You Drive” is another) in which privacy and environmental interests might on the surface appear to bump heads. The good news is this “conflict” is unnecessary, and easily avoided.

The only real interest “clash” will be between those that want to protect privacy and the right to control one’s own data versus those that seek to profit off or benefit from accessing, buying and selling it.

The fact is that smart and effective environmental policy does not, and should not, conflict with the individual’s right to privacy. It is paramount then that our state’s transition to a smart grid system addresses the potential privacy pitfalls while we are in the early stages of its implementation.

Rapid technological advancement – without the requisite regulatory safeguards – poses a significant threat to the individual’s right to privacy. This threat is epitomized by the “Smart Grid”.

We must embrace a thorough, thoughtful and deliberative public policy process that must include ironclad privacy protections that above all else gives the individual absolute control over, and ownership of his/her data.

Establishing tough consumer privacy protections won’t hamper the implementation of a smart grid system. In fact, it will increase its chances of acceptance and success by addressing the rightful privacy concerns consumers will inevitably have.

Elias Quinn, CEES, University of Colorado Law School summed up the challenge to privacy smart grid poses well:

“Here—as with all attempts at anticipating problems—the solution must involve, first and foremost, drawing attention to the potential privacy problem posed by the massive deployment of smart metering technologies and the collection of detailed information about the electricity consumption habits of millions of individuals.

From there, efforts to devise potential solutions must progress in parallel paths, the first in search of a regulatory fix, the second a technological one. The first protects against the systematic misuse of collected information by utilities, despite new pressures on their profitability, by ensuring the databases are used only for their principal purposes: informing efficient electricity generation, distribution, and management.

Such regulatory fixes are not difficult. In the final analysis, the privacy problem posed by smart metering is only a difficult one if the data gets unleashed before consequences are fully considered, or ignored once unfortunate consequences are realized. But to ignore the potential for privacy invasion embodied by the collection of this information is an invitation to tragedy.”

If interested in keeping track of how this issue progresses, particularly what transpires at the upcoming PUC hearings on smart grid and privacy, regularly check out my Privacy Revolt blog.

Posted via email from Ippei’s @CloudNewsCenter info database

[#Cloud Cloud News] How Blizzard supports World of Warcraft, an online game with 11.5 million users =>

November 29, 2009
This gaming environment, whose backbone infrastructure is provided by AT&T, is supported by 10 data centers scattered around the world. The following specs have been made public:
  • 1.3 petabytes of storage across 20,000 servers
  • Of these, World of Warcraft alone is allocated 13,250 server blades, 75,000 CPU cores, and 112.5 terabytes of RAM, supporting 11.5 million users around the world
  • A staff of 68 supports this system and its worldwide network


WoW’s Back End: 10 Data Centers, 75,000 Cores

November 25th, 2009 : Rich Miller


It takes a lot of resources to host the world’s largest online games. One of the largest players in this niche is Blizzard, which operates World of Warcraft and the gaming service for its Starcraft and Diablo titles. World of Warcraft (WoW) is played by more than 11.5 million users across three continents, requiring both scale and geographic scope.

Blizzard hosts its gaming infrastructure with AT&T, which provides data center space, network monitoring and management. AT&T, which has been supporting Blizzard for nine years, doesn’t provide a lot of details on Blizzard’s infrastructure. But Blizzard’s Allen Brack and Frank Pearce provided some details at the recent Game Developers Conference in Austin. Here are some data points:

  • Blizzard Online Network Services run in 10 data centers around the world, including facilities in Washington, California, Texas, Massachusetts, France, Germany, Sweden, South Korea, China, and Taiwan.
  • Blizzard uses 20,000 systems and 1.3 petabytes of storage to power its gaming operations.
  • WoW’s infrastructure includes 13,250 server blades, 75,000 CPU cores, and 112.5 terabytes of blade RAM.
  • The Blizzard network is managed by a staff of 68 people.
  • The company’s gaming infrastructure is monitored from a global network operations center (GNOC), which, like many NOCs, features televisions tuned to weather channels to track potential uptime threats across its data center footprint.
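Simple arithmetic on the figures above gives a feel for the hardware density (using decimal terabytes; the per-blade averages are my own derivation, not Blizzard’s):

```python
# Back-of-the-envelope averages from the published WoW figures.
blades = 13_250
cores = 75_000
ram_gb = 112.5 * 1000  # 112.5 TB in decimal gigabytes

assert round(cores / blades, 1) == 5.7   # ~5.7 CPU cores per blade
assert round(ram_gb / blades, 1) == 8.5  # ~8.5 GB of RAM per blade
```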

The AT&T Gaming Core Team was formed in 2004 to host gaming operations using AT&T’s IP network. The team consists of engineers and hosting specialists who provide round-the-clock support to companies offering MMO games.

For more on the specialized niche for game hosting, see Virtual Goods and the Cost of Infrastructure, Second Life and the Scalability of Online Games and Engineering Everquest.

For additional information on Blizzard’s recent discussions of its infrastructure, see Gamespot, Gamasutra and ComputerWorld.


Posted via email from Ippei’s @CloudNewsCenter info database