Microsoft will pay up to $100K for new Windows exploit techniques
Microsoft has announced that it will give security researchers cash rewards for devising novel software exploitation techniques, creating new exploit mitigation systems, and finding bugs in the beta of Internet Explorer 11 when it's released later this month.
Bug bounty programs, where security researchers receive a cash reward from software vendors for disclosing exploitable flaws in those vendors' software, have become an important part of the computer security landscape. Finding flaws and working out ways to exploit them can be a difficult and time-consuming process. Moreover, exploitable flaws have a market value, especially to criminals, as they can be used to propagate malware and attack systems.
Bounty programs address both concerns. They provide a means for compensating researchers for their efforts, and they provide a market for flaws that won't lead to compromised machines and harm to third parties. Google, Mozilla, Facebook, PayPal, and AT&T, among others, all offer monetary rewards for bug disclosures.
Until now, Microsoft has shied away from such programs. No longer. The company has announced three separate schemes. One of them is a straightforward bug bounty. When Internet Explorer 11 beta is released on June 26 (as part of the Windows 8.1 beta), Microsoft will pay up to $11,000 (and possibly even more) for any critical vulnerabilities discovered by July 26.
This is a program that's broadly comparable to schemes from Google and Mozilla for their browsers. The major difference is the time constraint. Explaining the limited window for submissions, Microsoft says that it wants to ensure that most critical bugs are reported during the beta (when usage of the software and hence the risk due to flaws is low) rather than after release.
During Internet Explorer 10's development, for example, there were low numbers of critical flaws reported during the beta, a large spike shortly after release, and then more low numbers. Microsoft wants to move that spike into the beta period, and the limited payout window could encourage researchers to look at the software sooner rather than later.
The company also argues that existing third-party bounty schemes don't really address products in their pre-release state. Tipping Point's Zero Day Initiative, for example, offers a way for researchers to be rewarded for disclosing flaws, but only for products that are widely deployed. Paying for bugs during the beta fills this gap.
The other two schemes are more unusual. Microsoft is not providing rewards for security flaws per se. Rather, there are two related programs. The company is offering up to $100,000 for any attack that bypasses Windows 8.1's anti-exploitation mechanisms. In tandem with this, the company is offering $50,000 for any useful defensive technique that would guard against such an exploit.
This pair of programs will start on June 26, but unlike the Internet Explorer 11 program, these two will be ongoing, with no fixed end date.
With these two schemes, Microsoft is doing something a little different from the traditional bug bounty. By focusing on exploit mitigation techniques, the company can learn about both individual problems in specific applications and system-wide issues. Addressing these system-wide issues can shore up the platform by making it harder to exploit flaws in all software on the platform, whether it's written by Microsoft or third parties.
New tech may let current graphics cards drive a $500 holographic display
Three-dimensional films and TVs may seem cutting-edge, but existing technologies all require optical tricks to create the illusion of depth (in some cases, very old tricks). The only truly 3D display technology we have, holography, has primarily been limited to displaying static images. That situation has slowly begun to change, but the existing technology is complicated and expensive, and it suffers from a slow refresh rate.
Now, some researchers have come up with a completely different method of creating the light pattern necessary to build a holographic image. The functional units in their device can be manufactured for pennies: the researchers suspect they could build a large holographic display for as little as $500, one that could potentially be driven by a commodity PC sporting a suite of high-end graphics cards.
The key to building a hologram is the ability of photons to interfere with each other, creating patterns where some regions have constructive interference and become bright, while others experience destructive interference and go dark. A carefully crafted diffractive element can bend and redirect light so that this interference pattern recreates patterns of light that look as if they had just reflected off the surface of a three-dimensional object. Most importantly, this 3D appearance is retained even as the viewer's perspective shifts around the surface.
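The bright/dark variation follows from textbook two-beam interference (this is standard optics, not a result specific to the paper): where two waves of intensities \(I_1\) and \(I_2\) overlap with phase difference \(\Delta\phi\), the combined intensity is

```latex
I = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos\Delta\phi
```

so the pattern is brightest where \(\Delta\phi = 2\pi n\) (constructive interference) and darkest where \(\Delta\phi = (2n+1)\pi\) (destructive). A hologram works by engineering \(\Delta\phi\) across the entire viewing zone.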
We have very mature technology that allows us to create a static surface that consistently displays a single image. But to get that image to move or to replace it with a different image entirely involves wholesale modification to the hardware that is creating the diffraction pattern. Even assuming that you can calculate what the new configuration of the hardware must be (something that's not especially easy), you would then have to reconfigure the hardware and rescan light over it. The existing examples of hardware that can do this have some serious issues.
Most of them involve liquid crystals or micro-mechanical hardware that physically alters its configuration. The authors of the new paper provide a laundry list of this technology's limitations: "relatively low bandwidth, high cost, low diffraction angle, poor scalability, and the presence of quantization noise, unwanted diffractive orders, and zero-order light." (The latter factors create visual artifacts in the displayed image.)
The device the researchers have created instead involves an array of devices called anisotropic leaky-mode couplers. These devices act as waveguides for light while allowing the light travelling through them to be manipulated. When exposed to radio-frequency radiation, the hardware will form acoustic waves that alter the light travelling through them. This allows each coupler to rapidly alter the timing and direction of the light it emits in response to changes in the radio waves. By placing a number of them in close proximity, it's possible to get the light they emit to interfere (creating a hologram) and then change this hologram simply by altering the radio frequencies.
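The steering principle is that of a standard acousto-optic deflector (the relation below is textbook acousto-optics; the exact waveguide geometry in the paper may differ). The RF drive at frequency \(f\) launches an acoustic wave traveling at velocity \(v\), and light of wavelength \(\lambda\) is diffracted through an angle of roughly

```latex
\theta \approx \frac{\lambda f}{v}
```

so sweeping the RF frequency sweeps the output angle, letting the electronics repaint the interference pattern with no moving parts.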
Other good features of the couplers are that they can be made to emit light with a single polarization, allowing a simple filter to cut out any imaging artifacts. And they can work with red, green, and blue light simultaneously, allowing a true-color hologram. This can also work with just about any light source.
You don't need many of these couplers to build a significant device; the authors estimate that about 500 of them would be all you'd need to build a single horizontal line of a one-meter-wide display. In the paper, they showed a device with 40 channels, and they're already testing one with 1,250 channels. The devices are also easy and cheap to make. Their 40-channel hardware cost $50 to make at MIT's custom fabrication facility, but the authors estimate that an equivalent could be made at a commercial fabrication facility for somewhere in the area of $3.
Since progress has already been made in getting graphics cards to generate holographic information, and since the radio-frequency control signals are compatible with those used for analog video displays, the authors think that a holographic display could probably be driven by a commodity PC with a bank of high-end graphics cards. The graphics cards would end up being the biggest expense in the hardware; the actual array of couplers would only cost about $500 to make. Any reasonably bright color LED could provide the light source. The end result would be a full-color hologram at standard video resolution with a refresh rate of about 30 fps.
Sources say Samsung may have declined to produce the next Facebook phone
Remember what happened to the HTC First? The Android handset was a solid device, but it didn't sell well despite (or because of?) its inclusion of Facebook Home right out of the box. It looks like Samsung has been paying attention because, as the Korea Herald reports, the company has turned down the opportunity to launch a phone with a Facebook interface.
According to Herald sources, Mark Zuckerberg visited Seoul to meet with Samsung executives yesterday and to discuss the possibility of another Facebook-type phone. But Samsung is reluctant to jump into that boat, considering the aforementioned struggles of the HTC First. “Samsung doesn’t want to help nurture a second Google,” claims one source. “[Google] is now becoming a formidable rival for Samsung in the handset business.” Additionally, the sources claimed that Facebook does not offer the premium image that Samsung wants to exude, and a partnership really wouldn't provide much benefit.
Facebook Home is already available for both of its major flagship handsets, the Galaxy S 4 and last-generation Galaxy S III, so there’s not much incentive for Samsung to even consider that kind of out-of-the-box branding.
Secret Sqrrl: NSA “spin-off” company releases data mining tool
Revelations of the National Security Agency's (NSA) data mining capabilities have recently come to the forefront, making "big data" a new subject of interest and concern for many people.
So what better time than now to launch a data analytics tool based on the very technology that the NSA uses to perform its real-time analysis of massive amounts of data being pulled in from sources like the PRISM program?
The timing of the launch may not have been ideal for startup Sqrrl Data, which announced version 1.1 of its flagship product Sqrrl Enterprise today. Sqrrl is a commercially extended version of Apache Accumulo, the big data analysis platform originally developed by the NSA for real-time data mining, with built-in protections designed to hide certain kinds of information from people without the clearance to view it. (Last week, Ars took an in-depth look at Accumulo and other tools that the NSA uses to tap into the firehose of data that it has access to.)
The news tie-in has certainly put a crimp in Sqrrl's ability to get customer testimonials. While the company has a handful of existing customers in government, finance, healthcare, and other industries, "Because of everything that's going on with NSA, our early customers are not particularly excited about talking," said Ely Kahn, Sqrrl's vice president of business development, in an interview with Ars.
The NSA created Accumulo in 2008 as a tool to handle the massive amounts of data it collects through its surveillance operations. Sqrrl Enterprise takes Accumulo and the Hadoop Distributed File System (HDFS) as a starting point. "We're essentially creating a premium grade version of Accumulo with additional analysis, data ingest, and security features, and taking a lot of the lessons learned from working in large environments," said Kahn.
The company has plenty of direct experience with large environments: it's mostly made up of members of the team who developed Accumulo at the NSA. Of the seven members of the founding team, six came from the NSA; Kahn, the only outlier, is the former White House Director of Cybersecurity.
Last year, the team moved from Washington, DC to Cambridge, MA to be closer to its investors. For these sorts of projects, In-Q-Tel, a Washington, DC-based venture capital firm funded by the CIA and the US Intelligence Community, is usually a primary investor. But despite the company's roots at the NSA, In-Q-Tel is not among Sqrrl Data's investors at the moment. "We talk to In-Q-Tel a lot," Kahn said. "They're good for companies trying to break into government, but not so much for companies trying to break out of government."
Sqrrl adds a few things atop Accumulo's inherent features to make them fit more easily into organizations without the inside IT muscle of the NSA. Accumulo's inherent features include "iterators"—software components that constantly query data being pushed into Accumulo's distributed data store. Like the MapReduce technology used by Google, iterators can be launched en masse against data to ferret out acorns of information. Like the squirrels in Willy Wonka's factory, the iterators keep at it, constantly sorting the nuts.
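As a rough illustration of the iterator idea, here is a toy Python model (not the real Accumulo API, which is Java and server-side; the names and data below are invented): iterators behave like composable passes over a sorted key-value stream, filtering and aggregating as the data is scanned rather than after it lands.

```python
# Toy model of Accumulo-style iterators: composable passes over a
# sorted key-value stream. (Illustrative only; real Accumulo iterators
# are Java classes that run inside the tablet servers.)

def grep_iterator(entries, term):
    """Pass through only entries whose value mentions the search term."""
    for key, value in entries:
        if term in value:
            yield key, value

def counting_iterator(entries):
    """Aggregate the surviving entries: count hits per key prefix."""
    counts = {}
    for key, _ in entries:
        prefix = key.split(":", 1)[0]
        counts[prefix] = counts.get(prefix, 0) + 1
    return counts

# A small invented stream of log entries keyed by source.
stream = [
    ("weblog:001", "GET /index.html 200"),
    ("weblog:002", "GET /admin 403"),
    ("syslog:001", "login failure for root"),
    ("syslog:002", "login failure for admin"),
]

# Chain the passes: filter first, then aggregate, in one streaming scan.
hits = counting_iterator(grep_iterator(stream, "admin"))
print(hits)  # {'weblog': 1, 'syslog': 1}
```

Because `grep_iterator` is a generator, nothing is buffered: each entry flows through the whole chain as it is scanned, which is the property that lets this style of query run against data in real time.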
Sqrrl includes a set of pre-built custom iterators "for real-time analytics," said Kahn, "and for what people refer to as a 'multi-analytic environment.'" They allow for things like real-time full text searches, graph analysis for identifying "entities" within data and the relationships between them, SQL-like queries, and statistical analysis.
One of the natural ways to use this sort of product is with what's referred to as Security Information Event Management (SIEM)—the real-time monitoring of log data, alarms, and other information generated by sensors of all types to look for the signature of a security threat. SIEM systems can be hooked to network monitoring systems such as those used by the NSA, but also to data streams from nearly anything that is networkable—point-of-sale systems, electronically keyed doors, and anything else that falls under the umbrella of the "Internet of Things."
The main characteristic that differentiates Accumulo (and thus Sqrrl Enterprise) from other "big data" platforms is its security. The platform allows for compartmentalization of segments of big data storage through an approach called cell-level security. The security level of each cell within an Accumulo table can be set independently, hiding it from users who don't have a need to know: whole sections of data tables can be hidden from view in such a way that users (and applications) without clearance would never know they weren't there. Sqrrl builds atop that security, providing ways for enterprises to plug in their existing user authentication and directory services systems to govern who can get access to data.
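The idea behind cell-level security can be sketched in a few lines of Python (a toy model of the concept: Accumulo's real visibility labels support richer boolean expressions, and the table layout and label syntax here are invented for illustration):

```python
# Toy model of cell-level visibility: every cell carries a label, and a
# scan silently drops any cell the user's authorizations don't satisfy.
# (Illustrative only; not Accumulo's actual API or label grammar.)

def can_see(user_auths, visibility):
    """'a&b' requires both authorizations; 'a|b' requires either one."""
    if "&" in visibility:
        return all(tok in user_auths for tok in visibility.split("&"))
    return any(tok in user_auths for tok in visibility.split("|"))

def scan(table, user_auths):
    """Return only the cells this user is cleared to see. Hidden cells
    never appear at all, so the user can't tell they exist."""
    return {k: v for k, (v, vis) in table.items() if can_see(user_auths, vis)}

# An invented table: each cell is (value, visibility label).
table = {
    ("patient1", "name"): ("Alice", "admin|billing"),
    ("patient1", "diagnosis"): ("flu", "admin&medical"),
    ("patient1", "balance"): ("$120", "billing"),
}

# A billing user sees the name and balance but has no idea a
# diagnosis cell even exists.
print(scan(table, {"billing"}))
```

The key design point is that filtering happens inside the scan itself rather than in the application, so a user without the right authorizations gets no signal that hidden cells are present.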
That level of security is also the basis of the NSA's contention that the data it collects is contained in a "lock-box"—at least in theory, the system could be configured to only allow certain elements of collected data to be viewed based on filter-based rules, preventing unauthorized fishing expeditions with wide-ranging queries. "We've learned recently that a number of other governments are using Accumulo—other governments in North America and Europe," Kahn said. "And that's largely because of the security controls."
In addition to the "half-dozen or so" paying customers Sqrrl has already landed, "there are a dozen additional unpaid proof of concepts out there with systems integrators and other big data providers who are testing out our software," Kahn said. With any luck, they could result in systems that help secure health data and financial data better while allowing big data analysis to help improve care and spot fraud.
Old-school 3D printing firm buys a hot little startup, MakerBot, for $403M
On Wednesday, the 3D printing industry saw one of its largest financial deals to date: Stratasys, a large, publicly-traded 3D printing and rapid prototyping company, acquired the well-known MakerBot for $403 million in stock “with an additional $201 million in performance-based earn-outs.”
Stratasys’ stock price is up slightly (around three percent) on the news in after-hours trading.
The Brooklyn-based MakerBot is only four years old as a company, and it has been targeting the higher-end "prosumer" market. The company sells its Replicator 2 Desktop 3D Printer for $2,200, and it has a 3D scanner on the way. Stratasys is a well-established company that has been around since 1989, with roots in industrial printing and prototyping.
MakerBot has been a darling of the tech industry. CEO Bre Pettis made the cover of Wired magazine in September 2012 and received $10 million in venture capital from Amazon CEO Jeff Bezos’ investment group, Facebook executive Sam Lessin, and others. Based on its own unaudited financial reporting, MakerBot says it took in $11.5 million in total revenue for the first quarter of 2013 compared to $15.7 million for all of 2012.
Stratasys has a market capitalization of $3.33 billion. With today's news, the company will also acquire MakerBot’s Thingiverse.com, which it describes as “the largest collection of downloadable digital designs for making physical objects, and which is empowered by a growing community of makers and creators.”
"MakerBot's 3D printers are rapidly being adopted by CAD-trained designers and engineers," said David Reis, Stratasys CEO, in a statement. "Bre Pettis and his team at MakerBot have built the strongest brand in the desktop 3D printer category by delivering an exceptional user experience. MakerBot has impressive products, and we believe that the company's strategy of making 3D printing accessible and affordable will continue to drive adoption. I am looking forward to working with Bre.”
WSJ: Microsoft’s plans to buy Nokia faltered, not likely to be revived
Citing "people familiar with the matter," the Wall Street Journal is reporting that Microsoft was in advanced talks to buy Finnish smartphone manufacturer Nokia. However, a sale is now unlikely. The newspaper reports that talks faltered and "aren't likely to be revived."
The WSJ claims that the two parties were close to an oral agreement but that Microsoft walked away from the deal after disagreements over the asking price. Additionally, Nokia's market position—trailing both Apple and Samsung as it is—was a sticking point. Nokia is, nonetheless, the biggest manufacturer by far of Windows Phone handsets and appears to be slowly increasing its market share.
The deal may have potentially been sweetened by the fact that Microsoft could have used offshore cash to make the purchase. This would've allowed the company to avoid the tax liability incurred by moving the money to the US. Microsoft did this for its 2011 Skype purchase.
Microsoft reverses controversial game licensing policies [Updated]
YET ANOTHER UPDATE (5:24 Eastern): Microsoft has confirmed to Kotaku that the "family sharing" and digital cloud library access features that were planned for the Xbox One are indeed gone thanks to today's policy reversal. Xbox One users will also apparently have to download a "Day One" patch to enable the offline mode.
"You can play, share, lend, and resell your games exactly as you do today on Xbox 360." That is now the official word from Microsoft.
Microsoft says it "imagined a new set of benefits such as easier roaming, family sharing, and new ways to try and buy games," but that it also realized that "the ability to lend, share, and resell these games at your discretion is of incredible importance to you."
No Internet connection will be required to play offline Xbox One games; the Internet will only be required for a one-time initial system setup. There will be no limitations on sharing or selling game discs. Downloaded games will be playable offline, and there will be no regional restrictions on those games.
On the downside, there will be no digital "family" sharing as was previously announced, and disc-based games will require the disc to be in the tray to be played.
Here is the full text on this change directly from Microsoft:
Last week at E3, the excitement, creativity and future of our industry was on display for a global audience. For us, the future comes in the form of Xbox One, a system designed to be the best place to play games this year and for many years to come. As is our heritage with Xbox, we designed a system that could take full advantage of advances in technology in order to deliver a breakthrough in game play and entertainment. We imagined a new set of benefits such as easier roaming, family sharing, and new ways to try and buy games. We believe in the benefits of a connected, digital future.

Since unveiling our plans for Xbox One, my team and I have heard directly from many of you, read your comments and listened to your feedback. I would like to take the opportunity today to thank you for your assistance in helping us to reshape the future of Xbox One. You told us how much you loved the flexibility you have today with games delivered on disc. The ability to lend, share, and resell these games at your discretion is of incredible importance to you. Also important to you is the freedom to play offline, for any length of time, anywhere in the world. So, today I am announcing the following changes to Xbox One and how you can play, share, lend, and resell your games exactly as you do today on Xbox 360. Here is what that means:

An internet connection will not be required to play offline Xbox One games – After a one-time system set-up with a new Xbox One, you can play any disc based game without ever connecting online again. There is no 24 hour connection requirement and you can take your Xbox One anywhere you want and play your games, just like on Xbox 360.

Trade-in, lend, resell, gift, and rent disc based games just like you do today – There will be no limitations to using and sharing games, it will work just as it does today on Xbox 360. In addition to buying a disc from a retailer, you can also download games from Xbox Live on day of release. If you choose to download your games, you will be able to play them offline just like you do today. Xbox One games will be playable on any Xbox One console — there will be no regional restrictions.

These changes will impact some of the scenarios we previously announced for Xbox One. The sharing of games will work as it does today, you will simply share the disc. Downloaded titles cannot be shared or resold. Also, similar to today, playing disc based games will require that the disc be in the tray.

We appreciate your passion, support and willingness to challenge the assumptions of digital licensing and connectivity. While we believe that the majority of people will play games online and access the cloud for both games and entertainment, we will give consumers the choice of both physical and digital content. We have listened and we have heard loud and clear from your feedback that you want the best of both worlds. Thank you again for your candid feedback. Our team remains committed to listening, taking feedback and delivering a great product for you later this year.
UPDATE: Microsoft says it has changed certain policies for the Xbox One "as a result of feedback from the Xbox community." A link to the "latest" details on the move currently points to a broken page, though. More to come.
Take this with a grain of salt for now, but Giant Bomb news writer Patrick Klepek is reporting that "multiple sources" are telling him that Microsoft will be announcing a complete reversal of its controversial Xbox One game licensing and online policies later today.
According to the report (which is currently killing Giant Bomb's servers), this means the Xbox One will no longer have to check in regularly online but will instead only require an Internet connection during the initial system setup. Game discs will be just as portable as they were on the Xbox 360, with no restrictions on resale or transfer, and downloadable games will work offline as well as online. Region locks on the system will also reportedly be dropped.
A separate report from WhatTheHiFi confirms the essential facts of the Giant Bomb report through its own unnamed sources, adding that developers are being informed of the change before consumers hear about it officially later today.
While neither site gives any more details on where this information is coming from, Klepek says that his sources tell him that Microsoft has definitely been listening closely to consumer feedback in the weeks since the Xbox One's policies were first revealed.
We're reaching out to our own sources on this and will of course let you know when anything official comes out.
We haven’t heard much from Motorola this year beyond the rumors surrounding its forthcoming “superphone,” the Moto X. The company has yet to announce anything beyond a mention of the phone at the All Things Digital D11 conference. If you’re going by the amount of rumors that have circulated, however, you’d think that Motorola already has a hit on its hands.
Late last year, we learned that Motorola was going to be working alongside Google to manufacture a new Android smartphone. We didn't know much about it other than the fact that Google was investing in a team and the technology to do something different. The search giant's involvement led many to believe great things were ahead.
As mentioned, Dennis Woodside, CEO of Motorola, walked onto the D11 conference stage back in May with the purported "superphone" in his pocket and refused to take it out. Woodside declared that the phone was real and that we'd be seeing it in October.
Things haven't been rosy for Motorola lately. After numerous flops, sales haven't been what they should be. But the company has some hope now that it's in Google's hands. "I sat down with [Google CEO] Larry Page about what we are going to do," Woodside said. "We will take it back to the roots of innovation and build devices that have the potential to change people's lives."
So what is this phone that Motorola says will change our lives? Let's ask the rumor mill.
Back in December, the Wall Street Journal reported that the X phone was in the works. At the time, speculation pointed to Motorola beefing up the software and focusing on putting more refined features into the X phone, like a better camera and a better screen. Motorola officially acquired a company called Viewdle, which focused on imaging and gesture recognition, and the company allegedly experimented with bendable screens and a ceramic chassis so that the handset would be more durable and stress resistant. This idea seemed to be reinforced by rumors that the phone would have a 4.8-inch OLED sapphire screen, which is apparently three times harder to crack than a display made of Gorilla Glass.
Next came talk of the future phone's performance. The Moto X, previously referred to as the "X phone," would include a quad-core Qualcomm Snapdragon 800 SoC with a 4,000 mAh battery pack and a waterproof chassis. A recent FCC filing showed that the model number was XT1058, which many believed was the "XFon," but there were no actual specifications. The FCC listing did show, however, that the phone featured Bluetooth 4.0, NFC, and 802.11ac, and that it supported AT&T's LTE bands. Benchmarks later surfaced that detailed a dual-core Snapdragon S4 Pro processor clocked at 1.7GHz and a 720p display—specifications that didn't match up with preliminary talk.
Flash forward to this week. The rumors now say that Motorola filed again with the FCC, this time for a phone that’s being distributed through US Cellular (the nation’s fifth largest carrier). The filing was for model number XT1055, which differs from the previous FCC filing. Many are now suggesting that we’re actually in store for a mid-range phone.
Woodside made comments at the D11 conference about Motorola relaunching its entire product lineup, so it’s possible that the different FCC filings are an array of handsets that will replace the ones that are currently available. We may still see a “superphone” in the end, followed by a family of mid-range phones.
On stage at D11, Woodside detailed some of the Moto X's purported features, including the use of low-power sensors that won't drain the phone's battery. The phone will potentially use a pair of processors that enable "always-on" sensors to remain contextually aware of what's going on around its user (it's unclear whether those processors will be manufactured by Qualcomm; again, these are rumors, so anything is possible). With this supposed function, the phone will apparently react differently to various use cases. If you're driving, for instance, it won't allow you to bring up the text messaging application. Or if you need to snap a photo, it will immediately launch the camera application as soon as you pull it out of your pocket.
The Moto X smartphone will be built in a new facility outside of Fort Worth, Texas. The building was originally designed and built by Nokia in the 1990s to manufacture feature phones. Since the plant has a history of manufacturing phones, setting up shop there should be a relatively simple transition for Motorola.
The Moto X will be the company's first smartphone to be made in the US, though it's merely being assembled stateside; its components will still come from international suppliers. "It's obviously our major market, so for us, having manufacturing here gets us much closer to our key customers and partners as well as our end users," Motorola spokesperson Will Moss told the AP. "It makes for much leaner, more efficient operations."
TechCrunch reported a while back that sketches of a new Motorola handset had surfaced. It's unclear if those images were related to the Moto X. These sketches did, however, match several leaks from a few months earlier, ones that claimed the back of the phone would use a logo that doubled as a touch-sensitive button for launching on-screen commands. That's certainly a possibility considering that a patent Google filed for earlier this year focused on backside controls.
Could this be the phone that Woodside had in his pocket? Unfortunately, we won’t find out until its alleged release later this year. For now, we'll keep our eyes on the rumor mill for entertainment—and occasional nuggets of potential information.
Kim Dotcom: Megaupload data in Europe wiped out by hosting company
Kim Dotcom today said on Twitter that Megaupload user data in Europe has been "irreversibly lost" because it was deleted by a Netherlands-based server hosting company called LeaseWeb.
"VERY BAD NEWS: #Leaseweb has wiped ALL #Megaupload servers. All user data & crucial evidence for our defense destroyed 'without warning,'" Dotcom tweeted.
Dotcom said LeaseWeb informed his team today that Megaupload servers were deleted on Feb. 1, 2013. "Our lawyers have repeatedly asked #Leaseweb not to delete #Megaupload servers while court proceedings are pending in the US," Dotcom tweeted. "We asked the DOJ to release some of #Megaupload's frozen assets to buy ALL servers. They refused." The lost data includes "[m]illions of personal #Megaupload files, petabytes of pictures, backups, personal & business property," Dotcom wrote.
LeaseWeb said it deleted the data because Dotcom didn't pay his bills and didn't respond to messages. The company explained in a blog post:
When MegaUpload was taken offline, 60 servers owned by MegaUpload were directly confiscated by the FIOD [a Dutch anti-fraud agency] and transported to the US. Next to that, MegaUpload still had 630 rented dedicated servers with LeaseWeb. For clarity, these servers were not owned by MegaUpload, they were owned by LeaseWeb. For over a year these servers were being stored and preserved by LeaseWeb, at its own costs. So for over one whole year LeaseWeb kept 630 servers available, without any request to do so and without any compensation… LeaseWeb has 60,000 servers under its management and more than 15,000 clients worldwide. The storage of the 630 servers—while a relatively small burden—must serve a purpose. During the year we stored the servers and the data, we received no request for access nor any request to retain the data. After a year of nobody showing any interest in the servers and data we considered our options. We did inform MegaUpload about our decision to re-provision the servers. As no response was received, we commenced the re-provisioning of the servers in February 2013. To minimize security risks and maximize the privacy of our clients, it is a standard procedure at LeaseWeb to completely clean servers before they are offered to any new customer.
Dotcom offers a different account, saying his legal team and the Electronic Frontier Foundation "have written several data preservation demands to #Leaseweb. We were never warned about the deletion." LeaseWeb "could have waited for the US court to decide on #Megaupload user data," he further wrote. "They knew of our desire to pay if the court released funds."
This appears to affect only European user data, as Megaupload data related to US customers is being stored pending the outcome of the criminal copyright infringement case filed against Dotcom and Megaupload. Some people who used Megaupload before it was shut down by the US government are trying to get their data back, but they have so far been rebuffed by US officials.
Dotcom wrote that Megaupload's other hosting firms, Carpathia and Cogent, "are still supporting us." Carpathia is storing Megaupload servers in a warehouse to protect the data, he wrote.
Cancer immunity of strange underground rat revealed
Researchers have discovered how one of the world’s oddest mammals developed resistance to cancer, and there is hope that their work could help fight the disease in humans.
Naked mole rats live underground, where environmental conditions are harsh but predators are few. They can live for more than 30 years, roughly 27 years longer than their close cousin the house mouse, which is particularly susceptible to cancer. They breathe slowly due to the limited supply of oxygen, survive on very little food, have poor sight, and are largely indifferent to pain.
Naked mole rats are also the only mammals that do not regulate their body temperature. And because they live in colonies where a single queen produces the offspring and only a few males father the litters, the males' sperm are notably sluggish.
For cancer researchers, mice and naked mole rats fall on two extremes of the disease spectrum. Mice are used as animal models of disease because of their short lives and high incidence of cancer, which let researchers study how cancer arises and test drugs that fight the disease.
Naked mole rats, on the other hand, have never developed cancer in the years that they have been studied. In labs, researchers often don’t wait for their animal models to develop cancer. Instead they induce cancer by blasting the animals with gamma radiation, transplanting tumors, or injecting cancer-causing agents. Do that to a naked mole rat, though, and nothing happens.
Now, Vera Gorbunova and Andrei Seluanov at the University of Rochester think they may have found one mechanism by which naked mole rats defend themselves against cancer. Their results, reported today in the journal Nature, make for a strange tale.
While studying cells taken from the armpits and lungs of naked mole rats, they found an unusually thick chemical surrounding the cells. This turned out to be hyaluronan, a substance that is present in all animals, where its main job is to hold cells together. Beyond providing mechanical strength, it is also involved in controlling when cells grow in number.
Because cancer relies on the unregulated growth of cells, hyaluronan was thought to be involved in the progression of malignant tumors. According to Gorbunova, it is not only the amount of hyaluronan that matters for regulating cell growth but also its molecular mass. Hyaluronan is a polymer: the more repeating units strung together in a single chain, the greater its molecular mass.
When the molecular mass is high, cells are “told” to stop increasing in number. When the molecular mass is low, they are “asked” to proliferate. In the case of the naked mole rat, Gorbunova found that the molecular mass was unusually high, as much as five times that of mice or humans.
To test whether this unusual hyaluronan was responsible for the naked mole rat's cancer resistance, Gorbunova increased the amount of an enzyme that degrades the chemical, reducing its molecular mass. Soon after, she observed that the rat's cells readily started growing in thick clusters, as cancerous mouse cells do.
In a separate experiment, she tested the hypothesis from the other direction, reducing the amount of hyaluronan by knocking out the genes responsible for its production. When a cancer-causing virus was then injected, the naked mole rat's cells, instead of resisting, became cancerous.
Gorbunova thinks that having thick hyaluronan might have helped increase the elasticity of the rat’s skin, allowing it to live in small tunnels underground. This trait might then have accidentally developed a new role of preventing cancer.
Rochelle Buffenstein, a physiologist at the University of Texas Health Science Center, has studied naked mole rats for years and was pleased to see that some light has been shed on this creature’s remarkable resistance to cancer. “As we learn more about these cancer-resistant mechanisms that are effective and can be directly pertinent to humans, we may find new cancer prevention strategies,” she said.