DigitalOcean Launches Drag-and-Drop Object Storage


Brought to you by Talkin’ Cloud

DigitalOcean is extending its developer-friendly portfolio to include object storage. Called Spaces, the object storage product is the company’s seventh new offering over the past 18 months.

Spaces, which is available starting at $5 per month for 250 GB of storage, features a simple drag-and-drop UI. It also works with many existing AWS S3-compatible tools.
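Because Spaces exposes an S3-compatible API, existing S3 tooling generally only needs to be pointed at a Spaces endpoint. The short sketch below, written against the boto3 Python SDK, shows the idea; the region, endpoint URL, bucket name, and credentials are illustrative assumptions rather than values from DigitalOcean's documentation.

# Minimal sketch: uploading a file to a DigitalOcean Space with an
# S3-compatible client (boto3). The endpoint, bucket name, and credentials
# below are placeholders/assumptions, not values from the article.
import boto3

session = boto3.session.Session()
client = session.client(
    "s3",
    region_name="nyc3",                                  # assumed region
    endpoint_url="https://nyc3.digitaloceanspaces.com",  # assumed Spaces endpoint
    aws_access_key_id="SPACES_ACCESS_KEY",               # placeholder credential
    aws_secret_access_key="SPACES_SECRET_KEY",           # placeholder credential
)

# Create a Space (bucket) and upload a static asset to it.
client.create_bucket(Bucket="my-example-space")
client.upload_file("logo.png", "my-example-space", "assets/logo.png")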

According to DigitalOcean, it launched Spaces today in response to thousands of requests from its developer community. In a survey of developers, also released today, DigitalOcean found that 45 percent of respondents use object storage, which suggests that a majority of developers have yet to adopt a storage service.

Developers said they consider cost effectiveness, uptime, and backup capabilities the most important factors when selecting a storage service. Object storage has seen pricing pressure over the past 12 months, according to a recent report by 451 Research.

“A lot of DigitalOcean’s historic strength has been in simplifying the experience, providing a really clean developer experience out of the box,” Redmonk analyst Stephen O’Grady told Talkin’ Cloud in an interview.

Some of the most common use-cases for Spaces include hosting web assets, images and large media files, and archiving backups in the cloud.

“One of the things you end up finding, particularly as these applications grow or you want them to do different, more sophisticated things is that you begin to have more of a need for static object storage,” O’Grady said. “That could be for media files, it could be for logs, it could be for any number of things that you end up wanting to do.”

Spaces was made available to almost 90,000 users during its early access phase. Users can sign up now for a free two-month trial.

“Spaces is the most important product we’ve released since Droplet, the first SSD-based compute instance in the market,” DigitalOcean CEO Ben Uretsky said in a statement. “DigitalOcean is becoming the developer’s platform, providing storage, compute and networking capabilities to scale applications of any size. Despite the technical complexity of launching a product like this, we’ve worked incredibly hard to ensure Spaces maintains the same ease-of-use and effortless UI as our other products. We wanted to simplify the way developers can innovate so they can spend time building great software.”

In April, DigitalOcean launched a free server monitoring service for developers to gain insight into resource utilization and operational health of their Droplets, and earlier in the year the company launched its Load Balancers product.

Tech Firms Face Fines Unless Terrorist Material Removed in Hours


(Bloomberg) — European leaders will warn the world’s biggest technology companies that they face fines unless they meet a target of removing terrorist content from the internet within two hours of it appearing.

At a meeting in New York on the sidelines of the United Nations annual meeting, U.K. Prime Minister Theresa May, French President Emmanuel Macron and Italian Prime Minister Paolo Gentiloni will address executives from companies including Facebook Inc., Alphabet Inc.’s Google, Microsoft Corp. and Twitter Inc.

Their goal is to persuade these tech giants that stopping terrorists from using their platforms should be a priority and the focus for innovation. May’s office pointed to Twitter’s success in this area. The company said Tuesday that automated tools had helped it to suspend nearly 300,000 accounts linked to terrorism so far this year.

“Terrorist groups are aware that links to their propaganda are being removed more quickly, and are placing a greater emphasis on disseminating content at speed in order to stay ahead,” May will tell the meeting, according to her office. “Industry needs to go further and faster in automating the detection and removal of terrorist content online, and developing technological solutions which prevent it being uploaded in the first place.”

Most of the material that Islamic State puts online is aimed at radicalizing people and encouraging them to carry out attacks at home. Britain has seen four such attacks this year, from the unsophisticated Westminster and London Bridge assaults, where the attackers used vehicles and knives, to the more advanced bomb attack in Manchester and last week’s failed subway bomb.

Instructions to make bombs are usually hosted on smaller platforms, which often lack the tools to identify and remove content.

May’s government is looking at making internet companies legally liable if they don’t take terrorist material down quickly. The first two hours after something is put online are considered crucial, as this is when most of the material is downloaded.

Islamic State has developed sophisticated marketing techniques to spread its propaganda before it can be identified and removed. May will say she wants internet companies to identify material as it’s being uploaded and stop it appearing at all.

Ahead of the meeting, May will address the UN, talking about the effects of the terrorism that she’s seen this year in Britain.

“As prime minister, I have visited too many hospitals and seen too many innocent people murdered in my country,” she’ll say, according to her office. “And I say enough is enough. As the threat from terrorists evolves, so must our cooperation.”

THE BEGINNER’S GUIDE TO VPS HOSTING



Virtual Private Server (VPS) hosting can be intimidating, especially for first-timers. This guide provides some basic information to help readers understand what VPS hosting is and how it works.

What is VPS hosting?

In VPS hosting, users are given back-end access so they can allocate the resources (memory, disk space, and processing power) their website needs.

How does VPS hosting differ from shared hosting?

As with shared hosting, several websites can be hosted on the same physical server. The difference is that in VPS hosting, virtual compartments are set up so that these sites do not have to compete for resources; instead, resources are allocated according to each user’s preferences.

What are the differences between managed and unmanaged VPS hosting?

Web hosting is not just about resources. Other services, such as automated backups, virus scans, software updates, and performance monitoring, are equally important. As the name suggests, managed VPS hosting handles these things, although the actual extent of coverage depends on the service provider. This is a good option for those who are just starting out, or who are not yet familiar with the technical side of managing a blog or a website. All they need to worry about is generating content for the site, because someone else does the back-end work for them. The catch is that these services come at a price.

Unmanaged hosting, on the other hand, comes with little or no additional service, because it is assumed that those who opt for it already have the technical know-how to install and manage the software that performs the tasks mentioned above. Do note that unmanaged hosting does not necessarily mean the user is left without any support. Reliable service providers offer tech support in case their clients experience glitches. On the plus side, unmanaged hosting costs less, because there are no bells and whistles attached.

What are the advantages and disadvantages of VPS hosting?

In a way, VPS hosting offers the best of what shared and dedicated servers provide. You have a lower chance of experiencing downtime or slowdowns because your website has its own allocated resources. At the same time, you will not have to spend as much as you would on a dedicated server. Furthermore, this plan is flexible, since you have relative freedom to customize the server according to your needs.

Do note, however, that this option requires a bit of technical knowledge on the user’s end, because users deal with the back-end of running the site themselves, including setting up the resources allocated to their respective sites.

Are there differences between cloud-based VPS hosting and VPS hosting on dedicated servers?

A virtual private server can take two forms. The first is an actual physical server, or a collection of servers, that has been split, with each segment forming its own micro-server. The second is cloud-based VPS hosting, which, as the name suggests, runs on a cloud platform made up of multiple servers clustered together.

The primary disadvantage of the first option is that, should the physical server crash or fail completely, all the VPSs within that server go down. Furthermore, should one of the VPSs on the server get hacked, the others may also be compromised. However, this is the cheaper option. Cloud-based VPS hosting, on the other hand, comes at a higher price, but more resources are available to users. Furthermore, given that there are multiple servers within a cluster, the failure of one does not necessarily mean complete loss of data, since files can be migrated to another server.

 

3 Reasons to Consider VPS Hosting for Your Website

Truth be told, virtual private servers are not right for every website. This is especially true if you have just started your website from scratch. Even so, that isn’t a valid reason to ignore what these servers can do. Considering that VPS hosting solutions vary in features, it is worth knowing whether or not they will benefit you.

Of course, the decision of whether or not to use VPS hosting is up to you. Sure, you can sign up for VPS hosting this very moment, but do you really need it?

Here are some good reasons to consider VPS hosting for your website.

1. Your website is growing.

Over time, your website will grow. From 10 or 20 visitors a day, you’ll soon have thousands. And no matter how much effort you put into optimizing your content, you’ll feel your site still isn’t doing any better. When that time comes, you will have to consider migrating from shared hosting to a VPS.

2. You feel the need to control some aspects of your hosting plan.

With shared hosting, you have less control over things, particularly the operating system, RAM, storage, and the control panel. With a VPS, you get to manage all of those aspects. Isn’t that something worth thinking about carefully?

3. A dedicated server doesn’t seem to suit your budget yet.

As they say, VPS hosting is where shared hosting and a dedicated server meet; it’s the middle road between the two. While VPS hosting can accommodate the demanding, growing websites that shared hosting can’t, it still can’t match what a dedicated server can do.

Then again, that doesn’t mean it is a poor alternative to dedicated servers. Technically, it’s just a matter of hardware: while a dedicated server satisfies the needs of one client, a VPS allows several users to take advantage of the benefits of a dedicated setup on one server.

Since a dedicated server is designed to cater to the needs of one customer alone, it is far pricier than VPS hosting. Thus, if you think you can’t afford to spend a significant amount of money on a dedicated server just yet, you might want to opt for a VPS in the meantime.

Summing Things Up

To wrap things up, it is safe to conclude that VPS hosting might not be the best option for every website, but if yours already calls for an upgrade for obvious reasons, you might want to consider making the switch. Yes, it can be a bit more expensive than your current shared hosting plan, but it offers accessibility and customization options that other plans don’t.

Hopefully, this post has helped clear things up. Now, can you decide whether or not to use VPS hosting?

CHOOSING A VPS HOST

Opting for a virtual private server (VPS) has become the go-to option for many website owners these days because of the flexibility it offers. Nonetheless, not all hosts offer the same services and features. This guide provides tips to help you choose the best VPS host for your needs.

1. Check server uptime

You don’t want a web host that experiences downtime frequently. Check the uptime guarantee of the VPS host you are considering. Most offer a 99.99% uptime guarantee, although some offer slightly less at a lower cost. Even if you are working on a tight budget, do not get one that offers less than a 99.95% uptime guarantee, so you get your money’s worth (see the quick calculation below).
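For a sense of what those percentages actually permit, here is a quick back-of-the-envelope calculation in Python; the figures follow directly from the arithmetic and are not any provider's published numbers.

# Convert an uptime guarantee into the downtime it permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60

def allowed_downtime_minutes(uptime_percent):
    return MINUTES_PER_YEAR * (100 - uptime_percent) / 100

for sla in (99.99, 99.95, 99.9):
    print(f"{sla}% uptime allows about {allowed_downtime_minutes(sla):.0f} minutes of downtime per year")

# Roughly: 99.99% ~ 53 min/year, 99.95% ~ 263 min (about 4.4 hours), 99.9% ~ 526 min (about 8.8 hours).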

2. Check the type of VPS hosting offered

There are two kinds of VPS web hosting offered: managed and unmanaged.

Managed VPS hosting means the service provider does the back-end work for you, from resource management to other services such as virus scanning and backing up your uploaded files. Note, however, that these add-ons come at a price.

Unmanaged VPS hosting, on the other hand, puts back-end management in the hands of the user. This means you set up the added services yourself. Tech-savvy website owners opt for this, given that it comes at a lower price than managed VPS hosting.

Should you choose the latter option, ask the service provider what kind of management tool it uses. Most use cPanel. However, others may use a different tool that you are not familiar with, which can make it difficult for you to run your website.

3. Customer support

Make sure that the VPS host you choose offers 24/7 customer support so that you are guaranteed to receive help in the event of downtime or any other problem you may encounter with your website. The best hosting companies offer several ways of contacting them (e.g., phone, email, social media, website) so they can respond to customers’ queries quickly.

4. Cost

Most users treat cost as the primary consideration when choosing a web host. This can be detrimental in the long run: while you may save a few bucks, you may end up with sub-standard service, including regular downtime and poor support. You may also want to check whether the web host offers a money-back guarantee, so you have the option to back out should you be unhappy with the service you are getting.

Why Use Managed VPS Hosting

A virtual private server (VPS) is a hosting environment that replicates the services provided by other types of servers: shared and dedicated.

VPS Operation

A physical server is divided into multiple virtual private servers. Each VPS has its own disk space, operating system, bandwidth, and RAM. A shared server that contains virtual private servers allows for dedicated space used by an account holder. Only the holder of that account can use the allotted virtual environment, which does not affect the other VPSs on the shared server.
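To see what a particular VPS has actually been allotted, you can inspect it from inside the instance. The sketch below is one illustrative way to do that in Python, assuming the third-party psutil package is installed; equivalent shell commands would work just as well.

# Minimal sketch: inspecting the resources allocated to a VPS from inside
# the instance. Assumes the third-party psutil package is available.
import psutil

cpu_count = psutil.cpu_count(logical=True)
mem = psutil.virtual_memory()
disk = psutil.disk_usage("/")

print(f"vCPUs:    {cpu_count}")
print(f"RAM:      {mem.total / 2**30:.1f} GiB")
print(f"Disk (/): {disk.total / 2**30:.1f} GiB, {disk.percent}% used")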

VPS Hosting

VPS hosting is perfect for testing ideas in the development stage without committing too much money upfront to hosting costs. A VPS offers excellent resources for hosting. Although you can use unmanaged VPS hosting, a managed VPS hosting solution places the burden of the software, hardware, backups, and maintenance work on someone other than yourself. There are various types of VPS hosting packages: those in the starter range, and those that can handle tremendous flows of daily traffic and offer abundant disk space, bandwidth, and RAM. At Jaguar PC, we offer managed VPS hosting packages that give you high-end performance and other capabilities for your websites.

 

Some of the main benefits connected with managed VPS hosting include:

Control: A virtual private server can be shut off or restarted at any time without disturbing the other VPSs on the same shared server.

Privacy: No sharing between operating systems takes place. Your files are protected from access by others.

Dedicated Space and Resources: A VPS has allocated resources, giving protected access to RAM, disk space, and bandwidth.

Customization: The user can change an application at any time based on their server needs.

 

Managed VPS Hosting

The process of managing a VPS is easy, but it does require some knowledge. A web hosting company can provide this service. Management can be full, semi, or independent. With a fully managed solution, the hosting company takes care of everything, from hardware changes to maintenance work. Semi-managed hosting puts some of the responsibility on the account holder (the hosting company may still provide maintenance and backups), and independently managed hosting places the entire responsibility on the account holder.

 

Be sure to contact us at Jaguar PC to learn how we can serve your managed VPS hosting needs.

Conclusion

VPS hosting is the middle ground for clients who want to allocate enough resources to their website but are also concerned about their budget. If you envision that your site will have a lot of content, including resource-heavy photos and videos, and will encounter moderate traffic, then this might be the best option for you.


The U.S. Kaspersky Ban Sets an Ugly Precedent


(Bloomberg View) — Is the U.S. government’s ban on the products of Kaspersky Lab, the Moscow-headquartered global cybersecurity company founded by Russians, a reasonable precaution or brazen protectionism? It’s possible to argue either case. But whether the ban is justified is less important in the grand scheme of things than what it does to the borderless nature of the cybersecurity industry and the tech industry as a whole.

The precautionary argument is laid out persuasively in the Department of Homeland Security statement. The DHS says that “Kaspersky anti-virus products and solutions provide broad access to files and elevated privileges on the computers on which the software is installed.” That’s undeniably true. It also says the Russian government could “request or compel assistance” from Kaspersky; that, too, is true as far as it goes: The Kremlin can put any amount of pressure on any company with sizable Russian operations, and Kaspersky is one such company.

See also: Kaspersky Lab Offers Source Code to U.S. Government

Kaspersky Lab has offered to let the U.S. inspect its source code, but any such inspection could miss backdoors, and the source code could be changed afterwards. The U.S. government could test Kaspersky’s products by putting them on a “honey server” and watching if any malicious activity ensues — but what if the Russian government is saving the Kaspersky weapon for some all-important attack, the way it would save some deeply embedded mole in the U.S. intelligence community?

Of course, facilitating government spying would kill off Kaspersky as a business with some $600 million in global revenues. Would the Russian government care about that if it felt national interest would be served by weaponizing Kaspersky at some crucial geopolitical moment? Not for a minute.

I asked Costin Raiu, the director of Kaspersky’s global research and analysis team, how the company answers the charges. He replied via email:

In our industry there are mainly two types of people — those who do offensive things, breaking software, creating espionage tools, exploits, and — to the extreme — helping governments with their spy efforts. The other category consists of people who fight for users, take their side, protect them from attacks, create software that defends computers and make all sorts of trouble for spy agencies.

For 20 years, Kaspersky Lab has been fighting for users. It created one of the world’s best security software and ONLY hired people who abide to some of the highest ethical standards. Any of our experts would consider it unethical to abuse user trust in order to facilitate spying by any government. Even if, let’s say, one or two such people would somehow infiltrate the company, there are 3000+ people working in Kaspersky Lab and some of them would notice something like that.

Essentially, it looks as though the firm is asking the world to take the purity of its intentions on faith, on the strength of its reputation. Kaspersky’s antivirus products consistently score at or near the top in product comparisons, and many years of such performance should be worth something. Its denials have convinced many, judging by the fact that there was no immediate follow-up on the U.S. decision from major U.S. allies.

German Interior Minister Thomas de Maiziere said recently that his government had had “positive experience” with Kaspersky and that the U.S. move was “grounds for a new test but not at this point grounds for altering our relationship.” The Canadian government, which has an even closer intelligence sharing relationship with the U.S. than the German one, has not moved to rescind its authorization of Kaspersky products. This undermines the “reasonable precaution” argument: The U.S. is not really safe from the theoretical danger of weaponized Kaspersky products if the nations with which it shares sensitive data don’t share its concerns.

There may be another reason why some governments are hesitant to follow the U.S. lead, at least for now: Kaspersky has proved helpful in identifying threats that potentially originate in the U.S. intelligence community. One example is the suspected National Security Agency tool known as the Regin trojan, discovered by Kaspersky and the U.S. firm Symantec in 2015.

It has always been difficult to attribute malicious actions in cyberspace, and traditionally cybersecurity firms didn’t expend much effort on it, focusing instead on defeating the threats — especially those presumed to be from non-state actors such as terrorists — wherever they came from. Arguably, that’s still the more reasonable approach, but the political focus has shifted to a vision of nation-state cyberwars.

The logic of the DHS statement that a Russian company is likely to act on behalf of the Russian government suggests it is potentially more credible on U.S.-generated threats. A reasonable policy for a third-party government in such a world would be to cooperate with the broadest range of cybersecurity companies so that no threat is downplayed under pressure from the nation states in which the security firms are based. That’s potentially good for Kaspersky outside the U.S., though in fact it’s ugly for the cybersecurity industry; instead of the equal trust the top firms enjoy today, such a pragmatic approach would place them under equal suspicion.

Suddenly, the attribution of attacks becomes as important as repelling them. But it’s a far iffier part of the business, and a far less useful one for practical purposes. Besides, given that insiders present the biggest threat when it comes to cyber intrusions, the companies can no longer safely count on an international pool of talent, as they have done for years. Is it worth hiring this talented Russian if he could be a spy? Does this nice American kid perhaps have instructions from the NSA to insert a backdoor in a commercial antivirus product? And in general, if nation states treat cyberspace as a theater of war, shouldn’t any government or large company confine itself to national software?

That’s certainly what Russian President Vladimir Putin appears to think when he tells Russian information technology companies to start using exclusively Russian-developed software if they want government contracts. “In some spheres the state will inevitably tell you: You know, we can’t take this because someone could push a button somewhere and it’ll all switch off,” Putin said.

Perhaps the U.S. government should be equally wary, as it was with the Kaspersky ban. But for other governments, and for private business, this kind of mindset could mean missing out on threats that cause real economic and political damage today. The shape of the cybersecurity industry before the new Cold War — a pool of international intellect and skill united against any and all threats — was more conducive to fighting them off.

Heptio Raises $25M to Drive Enterprise Kubernetes Growth


Enterprise Kubernetes company Heptio announced Wednesday it has raised $25 million in Series B funding to accelerate its growth and extend its services for hybrid cloud transformation beyond the Kubernetes project.

Heptio founders Joe Beda and Craig McLuckie created Kubernetes along with Brendan Burns while they were with Google. The company provides training, professional services, and products to integrate Kubernetes and related technologies with enterprise IT and reduce the cost and complexity of running them in production environments.

See also: Cloud and Web Hosting Industry Trends in Private Equity Investment

Kubernetes, the open source container automation platform developed by Google, has become the industry’s de facto standard for orchestrating and managing containers, according to the announcement.
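For readers unfamiliar with what “orchestrating and managing containers” looks like in practice, the sketch below uses the official Kubernetes Python client to ask an existing cluster to run and maintain three replicas of a container image. The image, names, and kubeconfig here are illustrative assumptions and are not tied to Heptio's products.

# Minimal sketch: asking a Kubernetes cluster to run and manage three
# replicas of a container. Assumes the official `kubernetes` Python client
# and a working kubeconfig; the image and names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.13")]
            ),
        ),
    ),
)

# Kubernetes then schedules the pods and restarts them if they fail.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)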

“Kubernetes really speaks to systems engineers, but there is a huge body of work to do to make it truly accessible to engineers who don’t necessarily have the time to ‘dig into’ the details of the project,” wrote McLuckie, Heptio CEO, in a blog post. “Upstream versions of Kubernetes remain inaccessible to many from an operations and accessibility perspective. It is still too hard to deploy a Kubernetes cluster, qualify whether it is conformant, and stitch it into the fabric of enterprise IT systems.”

Heptio will use the capital to “dramatically scale” its team and launch new products to make Kubernetes more accessible to developers and operators.

“Organizations of all sizes see open source cloud native technologies as a path to not only avoid vendor lock-in, but get more out of the tech that powers their business,” McLuckie wrote. “We aspire not only to connect them with the Kubernetes community, but also to partner with them to deliver and integrate full enterprise-grade solutions for the workloads that power their businesses.”

Funding was led by Madrona Venture Group, with participation from Lightspeed Venture Partners and Accel Partners. Accel led Heptio’s $8.5 million first funding round, with Madrona participating, just 10 months ago. The new funding brings the company’s total to $33.5 million. Tim Porter of Madrona will join Heptio’s board of directors as part of the funding agreement.

Previously open source-phobic Microsoft threw its support behind Kubernetes by joining the Cloud Native Computing Foundation as a platinum member in July, and was followed weeks later by AWS, the last hold-out among hyperscale public cloud providers.

Heptio’s competitors in the open source cloud infrastructure orchestration space include Platform9, which announced in June that it had raised $22 million in Series C funding; Interoute, which launched a managed container platform in March; and Cloudify, which GigaSpaces spun off into a new company in July.

Equifax Says Unpatched Apache Struts Vulnerability Behind Massive Security Breach


Brought to you by IT Pro

Equifax officials said today that its massive security breach was made possible by an unpatched web application server vulnerability, Apache Struts CVE-2017-5638, confirming what some in the security community expected to be the case last week when the news first broke.

In an update to its FAQ page on EquifaxSecurity2017.com, the company said it has been working with an independent cybersecurity firm to determine what information was accessed and which customers have been impacted.

Equifax announced last Thursday that personal information belonging to 143 million customers was accessed by hackers, in addition to credit card numbers for about 209,000 consumers. Beyond facing its customers’ wrath in the days that have followed, Equifax is now also subject to an FTC investigation.

“We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638. We continue to work with law enforcement as part of our criminal investigation, and have shared indicators of compromise with law enforcement,” Equifax said.

Apache Struts CVE-2017-5638 was made public on March 7, 2017, and a patch was made available that day. In a statement today, Apache said “the Equifax data compromise was due to their failure to install the security updates provided in a timely manner.”

Equifax discovered the breach on July 29 and didn’t disclose when it sought to patch the flaw, Bloomberg says.

In a blog post last week, Contrast Security CTO Jeff Williams said that while ensuring you don’t use libraries with known vulnerabilities is a good practice, it is not easy, since changes come out frequently.

“Often these changes require rewriting, retesting, and redeploying the application, which can take months. I have recently talked with several large organizations that took over four months to deal with CVE-2017-5638. Even in the best run organizations, there is often a gap of many months between vulnerability disclosure and updates being made to applications,” Williams wrote.
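One small way to shrink that gap is to automate the version check itself. The sketch below is a hedged illustration: the component inventory is a placeholder you would generate from your own build metadata, and 2.3.32 and 2.5.10.1 are the Struts releases Apache announced as fixing CVE-2017-5638.

# Hedged sketch: flag deployed components that are older than the minimum
# version known to fix a published vulnerability. The inventory and the
# fixed-version table are placeholders you would generate from your own
# build metadata and from vendor security announcements.
def parse_version(v):
    return tuple(int(part) for part in v.split("."))

# Minimum releases that fix CVE-2017-5638, per the Apache Struts announcements.
FIXED = {"struts2 (2.3.x)": "2.3.32", "struts2 (2.5.x)": "2.5.10.1"}

# What your applications actually ship with (example values).
deployed = {"struts2 (2.3.x)": "2.3.31", "struts2 (2.5.x)": "2.5.10.1"}

for component, version in deployed.items():
    if parse_version(version) < parse_version(FIXED[component]):
        print(f"VULNERABLE: {component} {version} < {FIXED[component]} -- patch needed")
    else:
        print(f"OK: {component} {version}")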

A statement from Apache Struts VP René Gielen on Saturday, before Equifax confirmed what caused the security breach, said:

“We as the Apache Struts PMC want to make clear that the development team puts enormous efforts in securing and hardening the software we produce, and fixing problems whenever they come to our attention. In alignment with the Apache security policies, once we get notified of a possible security issue, we privately work with the reporting entity to reproduce and fix the problem and roll out a new release hardened against the found vulnerability. We then publicly announce the problem description and how to fix it. Even if exploit code is known to us, we try to hold back this information for several weeks to give Struts Framework users as much time as possible to patch their software products before exploits will pop up in the wild. However, since vulnerability detection and exploitation has become a professional business, it is and always will be likely that attacks will occur even before we fully disclose the attack vectors, by reverse engineering the code that fixes the vulnerability in question or by scanning for yet unknown vulnerabilities.”

In the post, Gielen outlines five best practices for using open or closed source supporting libraries in software products and services:

1. Understand which supporting frameworks and libraries are used in your software products and in which versions. Keep track of security announcements affecting these products and versions.

2. Establish a process to quickly roll out a security fix release of your software product once supporting frameworks or libraries need to be updated for security reasons. Best is to think in terms of hours or a few days, not weeks or months. Most breaches we become aware of are caused by failure to update software components that are known to be vulnerable for months or even years.

3. Any complex software contains flaws. Don’t build your security policy on the assumption that supporting software products are flawless, especially in terms of security vulnerabilities.

4. Establish security layers. It is good software engineering practice to have individually secured layers behind a public-facing presentation layer such as the Apache Struts framework. A breach into the presentation layer should never empower access to significant or even all back-end information resources.

5. Establish monitoring for unusual access patterns to your public Web resources. Nowadays there are a lot of open source and commercial products available to detect such patterns and give alerts. We recommend such monitoring as good operations practice for business critical Web-based services.
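To make the fifth point concrete, the following sketch scans a plain-text request log for Content-Type values carrying OGNL-style expressions of the kind used against CVE-2017-5638. The log format and the pattern are illustrative assumptions; a production deployment would rely on a dedicated WAF or monitoring product rather than an ad hoc script.

# Hedged sketch for best practice 5: alert on unusual request patterns.
# Assumes a plain-text log in which each line includes the request's
# Content-Type header; the path and format are placeholders.
import re

# CVE-2017-5638 was triggered through crafted Content-Type headers containing
# OGNL expressions, which typically start with "%{" or "${".
SUSPICIOUS = re.compile(r"%\{|\$\{")

def scan(log_path):
    alerts = []
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line_no, line in enumerate(log, start=1):
            if "Content-Type" in line and SUSPICIOUS.search(line):
                alerts.append((line_no, line.strip()))
    return alerts

if __name__ == "__main__":
    for line_no, line in scan("access.log"):
        print(f"ALERT line {line_no}: {line[:120]}")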

Google and Facebook Fret Over Anti-Prostitution Bill's Fallout


(Bloomberg) — Google and Facebook Inc. are among companies opposing a Senate bill aimed at squelching online trafficking of children, a stance that makes the Silicon Valley giants uneasy allies of a website accused of providing an advertising platform for teen prostitution.

The companies and tech trade groups say online providers will face greater liability for speech and videos posted by users if U.S. lawmakers move against Backpage.com and its online classified ads. Bill supporters disagree, saying the measure creates a narrow exception to deter lawbreakers and won’t harm the internet.

“There’s clearly a problem” as victims of sex trafficking advertised on Backpage repeatedly lose before judges who cite the federal immunity, said Yiota Souras, general counsel for the National Center for Missing & Exploited Children, a non-profit group. “Time and again victims are getting kicked out of court, even though there’s trafficking going on.”

See also: Cloudflare CEO Says Company Could Not Remain “Neutral” as it Bans Daily Stormer

The tech companies say they agree with the purpose of the law, but fear the unintended consequences. They want to preserve immunity they won from Congress two decades ago, after the brokerage dramatized in the film “The Wolf of Wall Street” sued an online service over critical comments posted on message boards.

Now at least 28 U.S. senators have signed onto the effort to retract some of that protection granted during the dawn of the commercial internet. A hearing on the bill is scheduled for Sept. 19; in recent days Oracle Corp. and 21st Century Fox Inc. have endorsed the measure.

The Internet Association, a Washington-based group with members including Alphabet Inc.’s Google, Facebook, Twitter Inc. and Snap Inc., said in an email that sex-trafficking is abhorrent and illegal. But, the group wrote, the bill is “overly broad” and “would create a new wave of frivolous and unpredictable actions against legitimate companies rather than addressing underlying criminal behavior.”

Google, asked for its position on the bill, referred to a blog post by its vice president of public policy, former U.S. Representative Susan Molinari. “While we agree with the intentions of the bill, we are concerned” the measure would hinder the fight against sex trafficking. Smaller web sites anxious to avoid liability for knowingly aiding sex traffickers might stop looking for and blocking such content, Molinari said.

“We — and many others — stand ready to work with Congress on changes to the bill, and on other legislation and measures to fight human trafficking and protect and support victims and survivors,” she wrote.

Nu Wexler, a Facebook spokesman, declined to comment.

At issue is the nine-lives existence of Backpage.com, called “the leading online marketplace for commercial sex” by a Senate investigative subcommittee. The website, with a look similar to the popular Craigslist classified site, contains listings that offer services localized by city, according to the report.

An adult category was pulled from the site as legislative scrutiny intensified, Senator Claire McCaskill, a Missouri Democrat, said at a January hearing. She asked whether the cessation marked the end “to Backpage’s role in online sex trafficking of children, or just a cheap publicity stunt.”

Backpage accepted ads that contained words such as “lolita,” “teen,” “innocent” and “school girl,” and before publishing stripped them of the terms to conceal that they indicated child sex trafficking, according to the Senate report. Still, ads find ways to indicate a child is being sold, for instance listing a “sweet young cheerleader” and a “new hottie” with “very low mileage,” according to a 2016 filing at the U.S. Supreme Court asking justices to hear a victim’s case against Backpage. The court denied a hearing.

The Dallas-based site, once part of the Village Voice Media group, has repeatedly fended off attempts by prosecutors and trafficking victims to shut it down, successfully arguing that the immunity conferred by Congress protects its activities. Still, legal scrutiny is a constant. A federal prosecutor in Arizona is conducting a grand jury investigation and indictments may result, Backpage told a Washington state court in a February filing.

Liz McDougall, general counsel for Backpage, didn’t supply a comment.

In one recent case, a California judge threw out charges of pimping, saying federal law shielding websites “even applies to those alleged to support the exploitation of others by human trafficking.”

The California judge cited another recent case, in which the same part of federal law was found to protect Facebook from lawsuits brought by terror victims who claimed the social media giant helped groups in the Middle East, such as Hamas, by giving them a platform to air their incendiary views.

The language at issue is part of the Communications Decency Act, passed by Congress in 1996. A portion of that law, Section 230, provides immunity to internet sites that publish content provided by another person or entity.

Lawmakers were spurred to action after website provider Prodigy Services Inc. lost a $200 million judgment in a lawsuit brought by Stratton Oakmont Inc. over online messages, including one that called the brokerage a “cult of brokers who either lie for a living or get fired.” Regulators eventually shut Stratton, and the exploits of its founder Jordan Belfort, which included cocaine use, cavorting with prostitutes and lying to customers, were retold in the 2013 movie “The Wolf of Wall Street.”

Congress in a legislative report said it was including Section 230 “to overrule Stratton-Oakmont v. Prodigy and any other similar decisions which have treated such providers and users as publishers or speakers of content that is not their own.”

Broad Interpretation

Since then judges have interpreted the statute broadly, and tech companies have come to depend upon it.

Now senators including Rob Portman, an Ohio Republican, and Democrats McCaskill and Richard Blumenthal want to narrow it. Their bill introduced Aug. 1 would eliminate federal liability protections for websites that assist, support, or facilitate violations of sex trafficking laws, and let state officials take actions against businesses that violate those laws.

Current law “was never intended to help protect sex traffickers who prey on the most innocent and vulnerable among us,” Portman said in a news release. He called the changes “narrowly crafted.”

Silicon Valley has been fighting the measure since before it was introduced. In a Nov. 14 letter to then-President-elect Donald Trump, the Internet Association trade group listed policy priorities including upholding Section 230, calling it “indispensable for the continued investment and growth in user-generated content platforms.”

In meetings over the summer with congressional staff, representatives of Google and Facebook argued against the bill and promised to oppose it, according to two people familiar with the gatherings who asked not to be identified because the meetings weren’t public. The companies declined an invitation to testify, said one of the people.

Free Speech

Bill opponents include the tech groups Center for Democracy & Technology, the Electronic Frontier Foundation and the rights group American Civil Liberties Union, which all signed an Aug. 4 letter to Senate leaders calling Section 230 as important as the First Amendment in supporting free speech online.

Matt Schruers, a vice president for law and policy at the Computer & Communications Industry Association, a trade group with Google and Facebook as members, said Section 230 in bumper-sticker terms amounts to, “Don’t Shoot the Messenger.” Undermining the provision could chill online activities, he said.

“You’ll see people exiting the market,” Schruers said in an interview. “You’ll see only the largest companies willing to take the risk.”

Schruers’s group was among 10 tech trade associations that warned in an Aug. 2 letter to senators that the bill would severely undermine Section 230, creating “a devastating impact on legitimate online services” by “allowing opportunistic trial lawyers to bring a deluge of frivolous litigation.”

Others scoff.

“It’s not even a major change in the law,” said Mary Leary, a law professor at Catholic University in Washington, D.C. “It’s just a clarification.”

How Serious is VMware About Open Source?


Brought to you by Data Center Knowledge

Media pundits everywhere seemed surprised Wednesday when VMware’s CEO and CTO were both singing the praises of open source at VMworld Europe in Barcelona. With open source taking over the data center, and much of today’s proprietary software built on code that started as open source, I’m not sure why that’s surprising. The world’s biggest data center software company has little choice but to embrace open source if it wishes to remain that way.

“When we look at the world of open source, it is very very powerful in its ability to produce innovation and cool ideas,” VMware’s CTO Ray O’Farrell said. “But it’s not the software itself, it’s the community that builds up and is able to leverage open source.”

The “community” of which he was speaking appears to be developers rather than users, although I’m sure he’s more than happy to embrace open source users who want to include VMware in their plans. He mentioned that a year ago the company created an office under his jurisdiction that focuses on working with the developer community.

“The bottom line is, we want to engage with this community more, and this is a great way for us to contribute to it,” he said.

“One of the biggest things we want to do is open up our own product APIs and build a gilt-edged opportunity for the open source community. We haven’t been great at that over the years, but we’re working on getting cleaner APIs out to open source community.”

Unlike some companies with proprietary DNA (Oracle comes to mind for some reason), VMware should have little trouble learning to work hand-in-hand with open source developers, if it sets its collective mind to it. Why? Because the company has quite a bit of open source DNA coursing through its veins.

Cloud Foundry, for example, the multi-cloud application platform as a service that these days is developed by the Linux Foundation through its Cloud Foundry Foundation, was originally released in 2011 as an open source project by VMware, developed in-house by then CTO Derek Collison. Over the years, the company has also been a contributor to the Linux kernel and OpenStack and remains active in the Open vSwitch project, another Linux Foundation undertaking which develops and maintains a multilayer virtual switch used in network automation.

The company’s GitHub page also boasts of current open source projects, with a partial list of 15 projects that are “created and released by our engineers.” Included are Harbor, a container registry server based on Docker; Admiral, a scalable and lightweight container management platform; and Liota, an SDK for building secure IoT gateway data and control orchestration applications.

Often, a large software company making noises about building better relationships with open source developers will bring a sigh of skepticism from dyed-in-the-wool open source advocates. In this case, however, I’m inclined to give them the benefit of the doubt. But time will tell.

Experts Dispute VC's Forecast that Caused Data Center Stocks to Slump


Brought to you by Data Center Knowledge

The stocks of all seven US data center REITs (there are now six, following a merger that closed Thursday) slid down simultaneously this week, after a well-known venture capitalist and hedge-fund owner said at an investor conference that advances in processor technology will eventually lead to the demise of the data center provider industry.

But industry insiders say his views are overly simplistic, and that history has shown that advances in computing technology only create more hunger for data center capacity, not less.

Since server chips are getting smaller and more powerful than ever, companies in the future will not need anywhere near the amount of data center space they need today, Chamath Palihapitiya, founder and CEO of the VC firm Social Capital, who last year also launched a hedge fund, said Tuesday afternoon, according to Seeking Alpha, which cited Bloomberg as the source:

Word that Google may have developed its own chip that can run 50% of its computing on 10% of the silicon has him reading that “We can literally take a rack of servers that can basically replace seven or eight data centers and park it, drive it in an RV and park it beside a data center. Plug it into some air conditioning and power and it will take those data centers out of business.”

Following the event, called Delivering Alpha and produced by CNBC and Institutional Investor, stocks of data center providers Digital Realty Trust, Equinix, QTS, CyrusOne, CoreSite, DuPont Fabros Technology, and Iron Mountain were down, some just over 2 percent and others over 3 percent.

Alphabet subsidiary Google did release a paper this past April that said its custom Tensor Processing Unit chips, developed in-house, allowed it to avoid building additional data centers specifically for executing neural networks (the dominant type of computing system for AI), but the company said nothing about the implications of TPUs for other types of workloads, which collectively far outstrip neural nets in terms of total computing capacity they require.

But it also revealed in April that it’s been using TPUs to run machine learning workloads in its data centers since 2015. Meanwhile, cloud companies as a group (which includes Google) are spending more on Intel chips. Arrival of the TPU has not slowed Google’s investment in data centers; quite the opposite. Since the release of the paper, Google announced new cloud data centers in Northern Virginia, Oregon, Singapore, Australia, England, and, just earlier this week (on the same day Palihapitiya made his remarks) in Germany. The company uses a mixed data center strategy, building some of its data centers on its own and leasing the rest from the types of companies whose stocks Palihapitiya’s remarks set in motion.

One of those companies is San Francisco-based Digital Realty, whose shares were down 3.6 percent at one point Wednesday. John Stewart, the company’s senior VP of investor relations, said that nearly every phone call and meeting with institutional investors Wednesday and Thursday started with the investor asking about what the VC had said.

“Andy [Power, the company’s CFO] and I are in New York, meeting with our largest institutional investors, and this topic has come up as basically the first question every single meeting,” Stewart said in a phone interview Thursday.

Worries about advances in computing technology driving down demand for data center space aren’t new; it’s a concern that data center company executives have had to address periodically for many years. Computer chips powering data centers that were built in the last several years are denser (in terms of the number of cores per square centimeter) and more powerful than they’ve ever been; but during the same time, data center providers have seen a boom in demand unprecedented in scale, as companies like Google, Microsoft, Amazon, Oracle, and Uber have been ramping up investment in new data center capacity, some to support their quickly growing enterprise cloud businesses, and some to support growth in the number of individual consumers who use their apps.

Customers including IBM, Google, Apple, Microsoft, Oracle, and Amazon “are spending billions of dollars on incremental new data center CapEx, and they are doing that and signing leases with us for 10 to 15 years,” Power said. “They don’t think their data center’s going to go away.”

Bill Stoller, a financial writer and analyst and regular DCK contributor, said people who run data centers for these large companies are in the position to know the most about their companies’ future demand for data center capacity. “They are entering into long-term contracts for facilities built with today’s technology for cooling and electrical capacity,” he said. “Why would they be entering into 10-plus-year leases if this technology was obsolete? They are on the cutting edge.”

Technological progress has produced numerous massive leaps in computing efficiency, even outside of the semiconductor progress described by Moore’s Law (a growth curve that is in fact flattening); the most recent ones are server virtualization and cloud computing. Neither of those leaps caused a drop in demand for data center space. The outcome of such leaps has been the opposite: more efficient computing has opened up possibilities for new applications that can take advantage of the improvements, driving demand further.

Recent advances in AI, driven to a great extent by the lower cost of processors that can run neural networks, are creating more demand for computing capacity. Servers filled with specialized chips used specifically to train and/or execute neural networks, such as Google’s TPUs, or Nvidia’s GPUs (the most widely used processors for training workloads) require more power per square foot in a data center than CPUs that run most of the world’s software. They are not replacing regular servers in data centers; they’re being installed in addition to them.

“Those higher-density racks generate more heat; they require more cooling; and these are special applications for high-performance computing,” Stoller said. In the vast majority of cases, rack densities are much lower.

Rack density indicates the amount of computing power that can be housed in a single rack and has direct implications for the amount of real estate required to host software. The data center provider business isn’t just about selling space, however; it’s also about selling power, the ability to cool equipment (the higher the density, the more cooling capacity is required for a single rack), and access to networks.
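A rough, illustrative calculation shows the trade-off (the per-rack figures below are generic assumptions, not numbers from Stoller or any provider): higher densities mean fewer racks and less floor space for the same IT load, but more power and heat concentrated in each rack.

# Back-of-the-envelope sketch: how rack density trades floor space against
# power and cooling per rack. All figures are illustrative assumptions.
TOTAL_IT_LOAD_KW = 1000  # 1 MW of IT equipment to house

for label, kw_per_rack in [("typical enterprise rack", 5), ("dense GPU/TPU rack", 30)]:
    racks_needed = TOTAL_IT_LOAD_KW / kw_per_rack
    print(f"{label}: {kw_per_rack} kW/rack -> about {racks_needed:.0f} racks for 1 MW, "
          f"and roughly {kw_per_rack} kW of heat to remove per rack")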

Steven Rubis, VP of investor relations at DuPont Fabros Technology, the data center REIT that specializes in providing wholesale data center space to hyper-scale giants like Facebook, Microsoft, and others, said Palihapitiya’s statements were “an oversimplification. There’s probably more nuance to it; we get this argument from investors all the time.”