Posted:


[Cross-posted from the Official Google Blog]

Today is Safer Internet Day, a moment for technology companies, nonprofit organizations, security firms, and people around the world to focus on online safety, together. To mark the occasion, we’re rolling out new tools, and some useful reminders, to help protect you from online dangers of all stripes—phishing, malware, and other threats to your personal information.

1. Keeping security settings simple

The Security Checkup is a quick way to control the security settings for your Google Account. You can add a recovery phone number so we can help if you’re ever locked out of your account, strengthen your password settings, see which devices are connected to your account, and more. If you complete the Security Checkup by February 11, you’ll also get 2GB of extra Google Drive storage, which can be used across Google Drive, Gmail, and Photos.
Safer Internet Day is a great time to do it, but you can—and should!—take a Security Checkup on a regular basis. Start your Security Checkup by visiting My Account.

2. Informing Gmail users about potentially unsafe messages

If you and your Grandpa both use Gmail to exchange messages, your connections are encrypted and authenticated. That means no prying eyes can read those emails as they zoom across the web, and you can be confident that the message from your Grandpa in size 48 font (with no punctuation and a few misspellings) is really from him!

However, as our Safer Email Transparency Report explains, these things are not always true when Gmail interacts with other mail services. Today, we’re introducing changes in Gmail on the web to let you know when a received message wasn’t encrypted, when you’re composing a message to a recipient whose email service doesn’t support TLS encryption, or when the sender’s domain couldn’t be authenticated.

Here’s the notice you’ll see in Gmail before you send a message to a service that doesn’t support TLS encryption. You’ll also see the broken lock icon if you receive a message that was sent without TLS encryption.
If you receive a message that can’t be authenticated, you’ll see a question mark where you might otherwise see a profile photo or logo:
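The two indicators above can be sketched as a small decision over message metadata. This is purely illustrative; the field names are assumptions, not Gmail's actual internals.

```python
# Hedged sketch: which of the new Gmail indicators a client might show
# for a received message. Field names are illustrative assumptions.

def warning_indicators(tls_encrypted: bool, sender_authenticated: bool) -> list:
    """Return the UI indicators to display for a received message."""
    indicators = []
    if not tls_encrypted:
        # The message traveled via a server without TLS support.
        indicators.append("broken_lock")
    if not sender_authenticated:
        # The sender's domain couldn't be authenticated, so show a
        # question mark instead of a profile photo or logo.
        indicators.append("question_mark_avatar")
    return indicators

print(warning_indicators(tls_encrypted=False, sender_authenticated=True))
# ['broken_lock']
```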


3. Protecting you from bad apps

Dangerous apps that phish and steal your personal information, or hold your phone hostage and make you pay to unlock it, have no place on your smartphone—or any device, for that matter.

Google Play helps protect your Android device by rejecting bad apps that don’t comply with our Play policies. It also conducts more than 200 million daily security scans of devices, in tandem with our Safe Browsing system, for any signs of trouble. Last year, bad apps were installed on fewer than 0.13% of Android devices that install apps only from Google Play.

Learn more about these and other Android security features—like app sandboxing, monthly security updates for Nexus and other devices, and our Security Rewards Program—in new research we’ve made public on our Android blog.

4. Busting bad advertising practices

Malicious advertising “botnets” try to send phony visitors to websites to make money from online ads. Botnets threaten the businesses of honest advertisers and publishers, and because they’re often made up of devices infected with malware, they put users in harm’s way too.

We've worked to keep botnets out of our ads systems, cutting them off from advertising revenue and making it harder to profit from distributing malware and Unwanted Software. Now, as part of our effort to fight bad ads online, we’re reinforcing our existing botnet defenses by automatically filtering traffic from three of the top ad fraud botnets, comprising more than 500,000 infected user machines. Learn more about this update on the DoubleClick blog.
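At its simplest, this kind of filtering amounts to dropping ad requests whose source machines are known to belong to a botnet. A toy sketch (not Google's actual pipeline; the addresses and record shape are illustrative):

```python
# Illustrative sketch: filter ad requests originating from machines on a
# known-botnet list before they are counted toward advertising revenue.

KNOWN_BOTNET_IPS = {"203.0.113.7", "198.51.100.23"}  # example addresses

def filter_ad_traffic(requests):
    """Drop requests whose source IP belongs to a known botnet."""
    return [r for r in requests if r["ip"] not in KNOWN_BOTNET_IPS]

traffic = [
    {"ip": "203.0.113.7", "url": "/ad/click"},  # infected machine
    {"ip": "192.0.2.10", "url": "/ad/click"},   # ordinary visitor
]
print(filter_ad_traffic(traffic))
# [{'ip': '192.0.2.10', 'url': '/ad/click'}]
```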

5. Moving the security conversation forward

Recent events—Edward Snowden’s disclosures, the Sony Hack, the current conversation around encryption, and more—have made online safety a truly mainstream issue. This is reflected both in news headlines, and popular culture: “Mr. Robot,” a TV series about hacking and cybersecurity, just won a Golden Globe for Best Drama, and @SwiftOnSecurity, a popular security commentator, is named after Taylor Swift.

But despite this shift, security remains a complex topic that lends itself to lively debates between experts...that are often unintelligible to just about everyone else. We need to simplify the way we talk about online security to enable everyone to understand its importance and participate in this conversation.

To that end, we’re teaming up with Medium to host a virtual roundtable about online security, present and future. Moderated by journalist and security researcher Kevin Poulsen, this project aims to present fresh perspectives about online security in a time when our attention is increasingly ruled by the devices we carry with us constantly. We hope you’ll tune in and check it out.

Online security and safety are being discussed more often, and with more urgency, than ever before. We hope you’ll take a few minutes today to learn how Google protects your data and how we can work toward a safer web, for everyone.

Posted:


In November, we announced that Safe Browsing would protect you from social engineering attacks - deceptive tactics that try to trick you into doing something dangerous, like installing unwanted software or revealing your personal information (for example, passwords, phone numbers, or credit cards). You may have encountered social engineering in a deceptive download button, or an image ad that falsely claims your system is out of date. Today, we’re expanding Safe Browsing protection to protect you from such deceptive embedded content, like social engineering ads.
Consistent with the social engineering policy we announced in November, embedded content (like ads) on a web page will be considered social engineering when they either:

  • Pretend to act, or look and feel, like a trusted entity — like your own device or browser, or the website itself. 
  • Try to trick you into doing something you’d only do for a trusted entity — like sharing a password or calling tech support.

Below are some examples of deceptive content, shown via ads:
This image claims that your software is out of date to trick you into clicking “update”.

This image mimics a dialog from the FLV software developer, but it does not actually originate from that developer.

These buttons seem like they will produce content that relates to the site (like a TV show or sports video stream) by mimicking the site’s look and feel. They are often indistinguishable from the rest of the page.

Our fight against unwanted software and social engineering is still just beginning. We'll continue to improve Google's Safe Browsing protection to help more people stay safe online.

Will my site be affected?

If visitors to your web site consistently see social engineering content, Google Safe Browsing may warn users when they visit the site. If your site is flagged for containing social engineering content, you should troubleshoot with Search Console. Check out our social engineering help for webmasters.

Posted:


We launched our Vulnerability Reward Program in 2010 because rewarding security researchers for their hard work benefits everyone. These financial rewards help make our services, and the web as a whole, safer and more secure.

With an open approach, we’re able to consider a broad diversity of expertise for individual issues. We can also offer incentives for external researchers to work on challenging, time-consuming projects that might not otherwise receive proper attention.

Last January, we summarized these efforts in our first ever Security Reward Program ‘Year in Review’. Now, at the beginning of another new year, we wanted to look back at 2015 and again show our appreciation for researchers’ important contributions.

2015 at a Glance

Once again, researchers from around the world—Great Britain, Poland, Germany, Romania, Israel, Brazil, the United States, China, Russia, and India, to name a few—participated in our program.

Here's an overview of the rewards they received and broader milestones for the program as a whole.
Android Joins Security Rewards

Android was a newcomer to the Security Reward program in 2015, and it made a significant and immediate impact as soon as it joined.

We launched our Android VRP in June, and by the end of 2015, we had paid more than $200,000 to researchers for their work, including our largest single payment of $37,500 to an Android security researcher.

New Vulnerability Research Grants Pay Off

Last year, we began to provide researchers with Vulnerability Research Grants, lump sums of money that researchers receive before starting their investigations. The purpose of these grants is to ensure that researchers are rewarded for their hard work, even if they don’t find a vulnerability.

We’ve already seen positive results from this program; here’s one example. Kamil Histamullin a researcher from Kasan, Russia received a VRP grant early last year. Shortly thereafter, he found an issue in YouTube Creator Studio which would have enabled anyone to delete any video from YouTube by simply changing a parameter from the URL. After the issue was reported, our teams quickly fixed it and the researcher was was rewarded $5,000 in addition to his initial research grant. Kamil detailed his findings on his personal blog in March.

Established Programs Continue to Grow

We continued to see important security research in our established programs in 2015. Here are just a few examples:
  • Tomasz Bojarski found 70 bugs on Google in 2015, and was our most prolific researcher of the year. He found a bug in our vulnerability submission form.
  • You may have read about Sanmay Ved, a researcher who was able to buy google.com for one minute on Google Domains. Our initial financial reward to Sanmay—$6,006.13—spelled out “Google,” numerically (squint a little and you’ll see it!). We then doubled this amount when Sanmay donated his reward to charity.
We also injected some new energy into these existing research programs and grants. In December, we announced that we'd be dedicating one million dollars specifically for security research related to Google Drive.

We’re looking forward to continuing the Security Reward Program’s growth in 2016. Stay tuned for more exciting reward program changes throughout the year.

Posted:


[Cross-posted from the Google Research Blog]

Last August, we announced USENIX Enigma, a new conference intended to shine a light on great, thought-provoking research in security, privacy, and electronic crime. With Enigma beginning in just a few short weeks, I wanted to share a couple of the reasons I’m personally excited about this new conference.

Enigma aims to bridge the divide that exists between experts working in academia, industry, and public service, explicitly bringing researchers from different sectors together to share their work. Our speakers include those spearheading the defense of digital rights (Electronic Frontier Foundation, Access Now), practitioners at a number of well known industry leaders (Akamai, Blackberry, Facebook, LinkedIn, Netflix, Twitter), and researchers from multiple universities in the U.S. and abroad. With the diverse session topics and organizations represented, I expect interesting—and perhaps spirited—coffee break and lunchtime discussions among the equally diverse list of conference attendees.

Of course, I’m very proud to have some of my Google colleagues speaking at Enigma:
  • Adrienne Porter Felt will talk about blending research and engineering to solve usable security problems. You’ll hear how Chrome’s usable security team runs user studies and experiments to motivate engineering and design decisions. Adrienne will share the challenges they’ve faced when trying to adapt existing usable security research to practice, and give insight into how they’ve achieved successes.
  • Ben Hawkes will be speaking about Project Zero, a security research team dedicated to the mission of “making 0day hard.” Ben will talk about why Project Zero exists, and some of the recent trends and technologies that make vulnerability discovery and exploitation fundamentally harder.
  • Kostya Serebryany will be presenting a 3-pronged approach to securing C++ code, based on his many years of experience wrangling complex, buggy software. Kostya will survey multiple dynamic sanitizing tools he and his team have made publicly available, review control-flow and data-flow guided fuzzing, and explain a method to harden your code in the presence of any bugs that remain.
  • Elie Bursztein will go through key lessons the Gmail team learned over the past 11 years while protecting users from spam, phishing, malware, and web attacks. Illustrated with concrete numbers and examples from one of the largest email systems on the planet, attendees will gain insight into specific techniques and approaches useful in fighting abuse and securing their online services.
In addition to raw content, my Program Co-Chair, David Brumley, and I have prioritized talk quality. Researchers dedicate months or years of their time to thinking about a problem and conducting the technical work of research, but a common criticism of technical conferences is that the actual presentation of that research seems like an afterthought. Rather than be a regurgitation of a research paper in slide format, a presentation is an opportunity for a researcher to explain the context and impact of their work in their own voice; a chance to inspire the audience to want to learn more or dig deeper. Taking inspiration from the TED conference, Enigma will have shorter presentations, and the program committee has worked with each speaker to help them craft the best version of their talk. 

Hope to see some of you at USENIX Enigma later this month!

Posted:


As announced last September and supported by further recent research, Google Chrome no longer treats SHA-1 certificates as secure, and will completely stop supporting them over the next year. Chrome will discontinue support in two steps: first, blocking new SHA-1 certificates; and second, blocking all SHA-1 certificates.

Step 1: Blocking new SHA-1 certificates

Starting in early 2016 with Chrome version 48, Chrome will display a certificate error if it encounters a site with a leaf certificate that:

  1. is signed with a SHA-1-based signature
  2. is issued on or after January 1, 2016
  3. chains to a public CA
We are hopeful that no one will encounter this error, since public CAs must stop issuing SHA-1 certificates in 2016 per the Baseline Requirements for SSL.

In addition, a later version of Chrome in 2016 may extend these criteria in order to help guard against SHA-1 collision attacks on older devices, by displaying a certificate error for sites with certificate chains that: 
  1. contain an intermediate or leaf certificate signed with a SHA-1-based signature
  2. contain an intermediate or leaf certificate issued on or after January 1, 2016
  3. chain to a public CA
(Note that the first two criteria can match different certificates.)
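As a concrete illustration, the step-1 check can be sketched over simplified leaf-certificate metadata. Real verification parses full X.509 chains; the dict fields here are assumptions for illustration, not Chrome's internals.

```python
from datetime import date

# Sketch of the Chrome 48 criteria described above, applied to
# simplified certificate metadata (illustrative field names only).

CUTOFF = date(2016, 1, 1)

def step1_cert_error(leaf, chains_to_public_ca):
    """Return True if all three step-1 criteria hold: SHA-1 signature,
    issued on or after 2016-01-01, and chaining to a public CA."""
    return (chains_to_public_ca
            and leaf["sig_alg"].startswith("sha1")
            and leaf["issued"] >= CUTOFF)

leaf = {"sig_alg": "sha1WithRSAEncryption", "issued": date(2016, 3, 1)}
print(step1_cert_error(leaf, chains_to_public_ca=True))   # True
print(step1_cert_error(leaf, chains_to_public_ca=False))  # False
```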

Note that sites using new SHA-1 certificates that chain to local trust anchors (rather than public CAs) will continue to work without a certificate error. However, they will still be subject to the UI downgrade specified in our original announcement.

Step 2: Blocking all SHA-1 certificates

Starting January 1, 2017 at the latest, Chrome will completely stop supporting SHA-1 certificates. At this point, sites that have a SHA-1-based signature as part of the certificate chain (not including the self-signature on the root certificate) will trigger a fatal network error. This includes certificate chains that end in a local trust anchor as well as those that end at a public CA.

In line with Microsoft Edge and Mozilla Firefox, the target date for this step is January 1, 2017, but we are considering moving it earlier to July 1, 2016 in light of ongoing research. We therefore urge sites to replace any remaining SHA-1 certificates as soon as possible.

Note that Chrome uses the certificate trust settings of the host OS where possible, and that an update such as Microsoft’s planned change will cause a fatal network error in Chrome, regardless of Chrome’s intended target date.

Keeping your site safe and compatible

As individual TLS features are found to be too weak, browsers need to drop support for those features to keep users safe. Unfortunately, SHA-1 certificates are not the only feature that browsers will remove in the near future.

As we announced on our security-dev mailing list, Chrome 48 will also stop supporting RC4 cipher suites for TLS connections. This aligns with timelines for Microsoft Edge and Mozilla Firefox.

For security and interoperability in the face of upcoming browser changes, site operators should ensure that their servers use SHA-2 certificates, support non-RC4 cipher suites, and follow TLS best practices. In particular, we recommend that most sites support TLS 1.2 and prioritize the ECDHE_RSA_WITH_AES_128_GCM cipher suite. We also encourage site operators to use tools like the SSL Labs server test and Mozilla's SSL Configuration Generator.
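On the client side, Python's standard `ssl` module can illustrate part of this advice: setting TLS 1.2 as the floor and confirming that no RC4 suites are enabled. This is a sketch of configuration hygiene, not a full server audit.

```python
import ssl

# Sketch: check that a client-side TLS configuration matches the advice
# above -- TLS 1.2 as the minimum protocol, and no RC4 cipher suites.

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older protocols

enabled = [suite["name"] for suite in ctx.get_ciphers()]
rc4_suites = [name for name in enabled if "RC4" in name]
print(f"{len(enabled)} suites enabled; RC4 suites: {rc4_suites}")
```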

Posted:


[Cross-posted from the Webmaster Central Blog]

At Google, user security has always been a top priority. Over the years, we’ve worked hard to promote a more secure web and to provide a better browsing experience for users. Gmail, Google search, and YouTube have had secure connections for some time, and we also started giving a slight ranking boost to HTTPS URLs in search results last year. Browsing the web should be a private experience between the user and the website, and must not be subject to eavesdropping, man-in-the-middle attacks, or data modification. This is why we’ve been strongly promoting HTTPS everywhere.

As a natural continuation of this, today we'd like to announce that we're adjusting our indexing system to look for more HTTPS pages. Specifically, we’ll start crawling HTTPS equivalents of HTTP pages, even when the former are not linked to from any page. When two URLs from the same domain appear to have the same content but are served over different protocol schemes, we’ll typically choose to index the HTTPS URL if:

  • It doesn’t contain insecure dependencies.
  • It isn’t blocked from crawling by robots.txt.
  • It doesn’t redirect users to or through an insecure HTTP page.
  • It doesn’t have a rel="canonical" link to the HTTP page.
  • It doesn’t contain a noindex robots meta tag.
  • It doesn’t have on-host outlinks to HTTP URLs.
  • The sitemap lists the HTTPS URL, or doesn’t list the HTTP version of the URL.
  • The server has a valid TLS certificate.
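The checklist above can be expressed as a simple decision over page metadata. The field names below are illustrative assumptions, not the indexing system's actual signals.

```python
# Sketch of the HTTPS-preference checklist over simplified page
# metadata (illustrative field names only).

def prefer_https(page):
    """Return True if the HTTPS URL would typically be indexed."""
    return (not page["insecure_dependencies"]
            and not page["robots_blocked"]
            and not page["redirects_through_http"]
            and not page["canonical_points_to_http"]
            and not page["noindex"]
            and not page["onhost_http_outlinks"]
            and (page["sitemap_lists_https"] or not page["sitemap_lists_http"])
            and page["valid_tls_cert"])

clean_page = {
    "insecure_dependencies": False, "robots_blocked": False,
    "redirects_through_http": False, "canonical_points_to_http": False,
    "noindex": False, "onhost_http_outlinks": False,
    "sitemap_lists_https": True, "sitemap_lists_http": False,
    "valid_tls_cert": True,
}
print(prefer_https(clean_page))  # True
```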

Although our systems prefer the HTTPS version by default, you can also make this clearer for other search engines by redirecting your HTTP site to your HTTPS version and by implementing the HSTS header on your server.
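Both server-side signals can be sketched in a few lines: a permanent redirect from HTTP to HTTPS, and an HSTS header on HTTPS responses. The response shape below is illustrative, not tied to any particular framework.

```python
# Sketch: redirect HTTP requests to HTTPS with a 301, and attach an
# HSTS header to HTTPS responses so browsers remember to use HTTPS.

def respond(scheme, host, path):
    if scheme == "http":
        # A 301 tells browsers and crawlers the HTTPS URL is canonical.
        return {"status": 301,
                "headers": {"Location": f"https://{host}{path}"}}
    return {"status": 200,
            "headers": {"Strict-Transport-Security":
                        "max-age=31536000; includeSubDomains"}}

print(respond("http", "example.com", "/page"))
```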

We’re excited about taking another step forward in making the web more secure. By showing users HTTPS pages in our search results, we’re hoping to decrease the risk for users to browse a website over an insecure connection and making themselves vulnerable to content injection attacks. As usual, if you have any questions or comments, please let us know in the comments section below or in our webmaster help forums.

Posted:


Over the course of the coming weeks, Google will be moving to distrust the “Class 3 Public Primary CA” root certificate operated by Symantec Corporation, across Chrome, Android, and Google products. We are taking this action in response to a notification by Symantec Corporation that, as of December 1, 2015, Symantec has decided that this root will no longer comply with the CA/Browser Forum’s Baseline Requirements. As these requirements reflect industry best practice and are the foundation for publicly trusted certificates, the failure to comply with these represents an unacceptable risk to users of Google products.

Symantec has informed us they intend to use this root certificate for purposes other than publicly-trusted certificates. However, as this root certificate will no longer adhere to the CA/Browser Forum’s Baseline Requirements, Google is no longer able to ensure that the root certificate, or certificates issued from this root certificate, will not be used to intercept, disrupt, or impersonate the secure communication of Google’s products or users. As Symantec is unwilling to specify the new purposes for these certificates, and as they are aware of the risk to Google’s users, they have requested that Google take preventative action by removing and distrusting this root certificate. This step is necessary because this root certificate is widely trusted on platforms such as Android, Windows, and versions of OS X prior to OS X 10.11, and thus certificates Symantec issues under this root certificate would otherwise be treated as trustworthy.

Symantec has indicated that they do not believe their customers, who are the operators of secure websites, will be affected by this removal. Further, Symantec has also indicated that, to the best of their knowledge, they do not believe customers who attempt to access sites secured with Symantec certificates will be affected by this. Users or site operators who encounter issues with this distrusting and removal should contact Symantec Technical Support.

Further Technical Details of Affected Root:
Friendly Name: Class 3 Public Primary Certification Authority
Subject: C=US, O=VeriSign, Inc., OU=Class 3 Public Primary Certification Authority
Public Key Hash (SHA-1): E2:7F:7B:D8:77:D5:DF:9E:0A:3F:9E:B4:CB:0E:2E:A9:EF:DB:69:77
Public Key Hash (SHA-256):
B1:12:41:42:A5:A1:A5:A2:88:19:C7:35:34:0E:FF:8C:9E:2F:81:68:FE:E3:BA:18:7F:25:3B:C1:A3:92:D7:E2

MD2 Version
Fingerprint (SHA-1): 74:2C:31:92:E6:07:E4:24:EB:45:49:54:2B:E1:BB:C5:3E:61:74:E2
Fingerprint (SHA-256): E7:68:56:34:EF:AC:F6:9A:CE:93:9A:6B:25:5B:7B:4F:AB:EF:42:93:5B:50:A2:65:AC:B5:CB:60:27:E4:4E:70

SHA1 Version
Fingerprint (SHA-1): A1:DB:63:93:91:6F:17:E4:18:55:09:40:04:15:C7:02:40:B0:AE:6B
Fingerprint (SHA-256): A4:B6:B3:99:6F:C2:F3:06:B3:FD:86:81:BD:63:41:3D:8C:50:09:CC:4F:A3:29:C2:CC:F0:E2:FA:1B:14:03:05
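Fingerprints like the ones listed above are simply a hash over the certificate's DER encoding, rendered as colon-separated hex bytes. A short sketch (the input below is a stand-in, not the real certificate):

```python
import hashlib

# Compute a certificate-style fingerprint: hash the DER bytes and
# format the digest as colon-separated uppercase hex.

def fingerprint(der_bytes, algo="sha256"):
    digest = hashlib.new(algo, der_bytes).digest()
    return ":".join(f"{b:02X}" for b in digest)

# A SHA-1 fingerprint has 20 byte values; SHA-256 has 32.
print(fingerprint(b"not-a-real-certificate", "sha1"))
```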

Posted:


“At least 2 or 3 times a week I get a big blue warning screen with a loud voice telling me that I’ve a virus and to call the number at the end of the big blue warning.”
“I’m covered with ads and unwanted interruptions. what’s the fix?”
“I WORK FROM HOME AND THIS POPING [sic] UP AND RUNNING ALL OVER MY COMPUTER IS NOT RESPECTFUL AT ALL THANK YOU.”

Launched in 2007, Safe Browsing has long helped protect people across the web from well-known online dangers like phishing and malware. More recently, however, we’ve seen an increase in user complaints like the ones above. These issues and others—hijacked browser settings, software installed without users' permission that resists attempts to uninstall—have signaled the rise of a new type of malware that our systems haven’t been able to reliably detect.

More than a year ago, we began a broad fight against this category of badness that we now call “Unwanted Software”, or “UwS” (pronounced “ooze”). Today, we wanted to share some progress and outline the work that must happen in order to continue protecting users across the web.

What is UwS and how does it get on my computer?

In order to combat UwS, we first needed to define it. Despite lots of variety, our research enabled us to develop a defining list of characteristics that this type of software often displays:

  • It is deceptive, promising a value proposition that it does not meet.
  • It tries to trick users into installing it or it piggybacks on the installation of another program.
  • It doesn’t tell the user about all of its principal and significant functions.
  • It affects the user’s system in unexpected ways.
  • It is difficult to remove.
  • It collects or transmits private information without the user’s knowledge.
  • It is bundled with other software and its presence is not disclosed.

Next, we had to better understand how UwS is being disseminated.

This varies quite a bit, but time and again, deception is at the heart of these tactics. Common UwS distribution tactics include: unwanted ad injection, misleading ads such as “trick-to-click”, ads disguised as ‘download’ or ‘play’ buttons, bad software downloader practices, misleading or missing disclosures about what the software does, hijacked browser default settings, annoying system pop-up messages, and more.

Here are a few specific examples:
  • Deceptive ads leading to UwS downloads
  • Ads from an unwanted ad injector taking over a New York Times page and sending the user to phone scams
  • An unwanted ad injector inserting ads on the Google search results page
  • A new tab page overridden by UwS
  • UwS hijacking Chrome navigations and directing users to a scam tech support website

One year of progress

Because UwS touches so many different parts of people’s online experiences, we’ve worked to fight it on many different fronts. Weaving UwS detection into Safe Browsing has been critical to this work, and we’ve pursued other efforts as well—here’s an overview:
  • We now include UwS in Safe Browsing and its API, enabling people who use Chrome and other browsers to see warnings before they go to sites that contain UwS. The red warning below appears in Chrome.
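That API exposes the same verdicts to developers. Below is a hedged sketch of what a Safe Browsing v4 lookup request body might look like; the field names follow the public API documentation, but verify against the current schema before relying on it.

```python
import json

# Sketch of a Safe Browsing v4 threatMatches.find request body asking
# whether a URL is flagged for unwanted software or social engineering.
# Endpoint and API-key handling are omitted.

def lookup_request(client_id, url):
    body = {
        "client": {"clientId": client_id, "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["UNWANTED_SOFTWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    return json.dumps(body)

print(lookup_request("example-client", "http://example.test/"))
```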

It’s still early, but these changes have already begun to move the needle.
  • UwS-related Chrome user complaints have fallen. Last year, before we rolled out our new policies, these were 40% of total complaints; now they’re 20%.
  • We’re now showing more than 5 million Safe Browsing warnings per day on Chrome related to UwS to ensure users are aware of a site’s potential risks.
  • We helped more than 14 million users remove over 190 deceptive Chrome extensions from their devices.
  • We reduced the number of UwS warnings that users see via AdWords by 95%, compared to last year. Even prior to last year, less than 1% of UwS downloads were due to AdWords.

However, there is still a long way to go. 20% of all feedback from Chrome users is related to UwS, and we believe 1 in 10 Chrome users have hijacked settings or unwanted ad injectors on their machines. We expect that users of other browsers continue to suffer from similar issues; there is lots of work still to be done.

Looking ahead: broad industry participation is essential

Given the complexity of the UwS ecosystem, the involvement of players across the industry is key to making meaningful progress in this fight. This chain is only as strong as its weakest links: everyone must work to develop and enforce strict, clear policies related to major sources of UwS.

If we’re able, as an industry, to enforce these policies, then everyone will be able to provide better experiences for their users. With this in mind, we’re very pleased to see that the FTC recently warned consumers about UwS and characterized it as a form of malware. This is an important step toward uniting the online community and focusing good actors on the common goal of eliminating UwS.

We’re still in the earliest stages of the fight against UwS, but we’re moving in the right direction. We’ll continue our efforts to protect users from UwS and work across the industry to eliminate these bad practices.

Posted:


Authenticator for Android is used by millions of users and, combined with 2-Step Verification, it provides an extra layer of protection for Google Accounts.

Our latest version has some cool new features. You will notice a new icon and a refreshed design. There's also support for Android Wear devices, so you'll be able to get verification codes from compatible devices, like your watch.
The new Authenticator also comes with a developer preview of support for NFC Security Key, based on the FIDO Universal 2nd Factor (U2F) protocol. The Play Store will prompt for the NFC permission before you install this version of Authenticator.
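The verification codes themselves are standard TOTP (RFC 6238): an HMAC-SHA1 over a 30-second time counter, dynamically truncated to six digits. A minimal sketch of the algorithm:

```python
import hashlib
import hmac
import struct
import time

# Minimal TOTP (RFC 6238): HMAC-SHA1 over a 30-second counter,
# dynamically truncated (RFC 4226) to a six-digit code.

def totp(secret, for_time=None, digits=6):
    counter = int(time.time() if for_time is None else for_time) // 30
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's test vector: at time 59, the 8-digit SHA-1 code for this
# secret is 94287082, so the 6-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # 287082
```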

Developers who want to learn more about U2F can refer to FIDO's specifications. Additionally, you can try it out at https://u2fdemo.appspot.com. Note that you'll need an Android device running the latest versions of Google Chrome and Authenticator and also a Security Key with NFC support.

You can find the latest Authenticator for Android on the Play Store.

Posted:



Google Safe Browsing has been protecting well over a billion desktop users against malware, unwanted software, and social engineering sites on the web for years. Today, we’re pleased to announce that we’ve extended our protective umbrella to hundreds of millions of Chrome users on Android.

How To Get It

If you’re an Android user, you probably already have it! This new Safe Browsing client on Android is part of Google Play Services, starting with version 8.1. The first app to use it is Chrome, starting with version 46—we’re now protecting all Android Chrome users by default. If you look at Chrome’s Settings > Privacy menu, you can verify that Safe Browsing is enabled and that you’re protected. Chrome warns you about dangerous sites as shown below. It does this while preserving your privacy, just like on desktop.

What Came Before

The Android platform and the Play Store have long had protection against potentially harmful apps. And as our adversaries have improved their skills in trying to evade us, we’ve improved our detection, keeping Android app users safe. But not all dangers to mobile users come from apps.

What’s New

Social engineering—and phishing in particular—requires different protection; we need to keep an up-to-date list of bad sites on the device to make sure we can warn people before they browse into a trap. Providing this protection on a mobile device is much more difficult than on a desktop system, in no small part because we have to keep that list fresh while respecting tight resource constraints:

  • Mobile data costs money for most users around the world. Data size matters a lot.
  • Mobile data speeds are slower than Wi-Fi in much of the world. Data size matters a lot.
  • Cellular connectivity quality is much more uneven, so getting the right data to the device quickly is critically important. Data size matters a lot.

Maximum Protection Per Bit

Bytes are big: our mantra is that every single bit that Safe Browsing sends a mobile device must improve protection. Network bandwidth and battery are the scarcest resources on a mobile device, so we had to carefully rethink how to best protect mobile users. Some social engineering attacks only happen in certain parts of the world, so we only send information that protects devices in the geographic regions they’re in.

We also make sure that we send information about the riskiest sites first: if we can only get a very short update through, as is often the case on lower-speed networks in emerging economies, the update really has to count. We also worked with Google’s compression team to make the little data that we do send as small as possible.
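The "riskiest sites first" idea can be sketched as packing the highest-risk entries into whatever update size the connection can handle. This is purely illustrative; the real update protocol is far more compact than URL prefixes in a list.

```python
# Toy sketch: greedily include the riskiest entries that still fit
# within a byte budget, so a short update protects against the worst
# sites first.

def build_update(entries, byte_budget):
    """entries: (risk_score, url_prefix) pairs."""
    update, used = [], 0
    for risk, prefix in sorted(entries, key=lambda e: e[0], reverse=True):
        cost = len(prefix.encode())
        if used + cost <= byte_budget:
            update.append(prefix)
            used += cost
    return update

entries = [(0.9, "evil.example/"), (0.2, "meh.example/"), (0.7, "bad.example/")]
print(build_update(entries, byte_budget=30))  # ['evil.example/', 'bad.example/']
```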

Together with the Android Security team, we made the software on the device extra stingy with memory and processor use, and careful about minimizing network traffic. All of these details matter to us; we must not waste our users’ data plans, or a single moment of their battery life.

More Mobile

We hunt badness on the Internet so that you don’t discover it the hard way, and our protection should never be an undue burden on your networking costs or your device’s battery. As more of the world relies on the mobile web, we want to make sure you’re as safe as can be, as efficiently as possible.