SRE fundamentals: SLIs, SLAs and SLOs [Published articles]

By Jay Judkowitz, Senior Product Manager and Mark Carter, Group Product Manager

Next week at Google Cloud Next ‘18, you’ll be hearing about new ways to think about and ensure the availability of your applications. A big part of that is establishing and monitoring service-level metrics—something that our Site Reliability Engineering (SRE) team does day in and day out here at Google. The end goal of our SRE principles is to improve services, and in turn the user experience, and next week we’ll be discussing some new ways you can incorporate SRE principles into your operations.

In fact, a recent Forrester report on infrastructure transformation offers details on how you can apply these SRE principles at your company—more easily than you might think. They found that enterprises can apply most SRE principles either directly or with minor modification.

To learn more about applying SRE in your business, we invite you to join Ben Treynor, head of Google SRE, who will be sharing some exciting announcements and walking through real-life SRE scenarios at his Next ‘18 Spotlight session. Register now as seats are limited.

The concept of SRE starts with the idea that metrics should be closely tied to business objectives. We use several essential tools—SLO, SLA and SLI—in SRE planning and practice.

Defining the terms of site reliability engineering

These tools aren’t just useful abstractions. Without them, you cannot know if your system is reliable, available or even useful. If they don’t tie explicitly back to your business objectives, then you don’t have data on whether the choices you make are helping or hurting your business.

As a refresher, here’s a look at SLOs, SLAs, and SLIs, as discussed by AJ Ross, Adrian Hilton and Dave Rensin of our Customer Reliability Engineering team in the January 2017 blog post, SLOs, SLIs, SLAs, oh my - CRE life lessons.

1. Service-Level Objective (SLO)
SRE begins with the idea that a prerequisite to success is availability. A system that is unavailable cannot perform its function and will fail by default. Availability, in SRE terms, defines whether a system is able to fulfill its intended function at a point in time. In addition to being used as a reporting tool, the historical availability measurement can also describe the probability that your system will perform as expected in the future.

When we set out to define the terms of SRE, we wanted to set a precise numerical target for system availability. We term this target the availability Service-Level Objective (SLO) of our system. Any discussion we have in the future about whether the system is running sufficiently reliably and what design or architectural changes we should make to it must be framed in terms of our system continuing to meet this SLO. 
Keep in mind that the more reliable the service, the more it costs to operate. Define the lowest level of reliability that you can get away with for each service, and state that as your SLO. Every service should have an availability SLO—without it, your team and your stakeholders cannot make principled judgments about whether your service needs to be made more reliable (increasing cost and slowing development) or less reliable (allowing greater velocity of development). Excessive availability can become a problem because now it’s the expectation. Don’t make your system overly reliable if you don’t intend to commit to it always being that reliable. 
Within Google, we implement periodic downtime in some services to prevent a service from being overly available. You might also try experimenting with planned-downtime exercises with front-end servers occasionally, as we did with one of our internal systems. We found that these exercises can uncover services that are using those servers inappropriately. With that information, you can then move workloads to somewhere more suitable and keep servers at the right availability level.
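To make that trade-off concrete, it helps to translate an availability target into the downtime it actually permits over a reporting window. Here is a minimal Python sketch of that arithmetic; the targets are illustrative, not recommendations from the original post:

    # Sketch: translate an availability SLO into an allowed-downtime budget.
    # The SLO values below are illustrative only.

    def allowed_downtime_minutes(slo: float, window_days: int = 30) -> float:
        """Minutes of downtime permitted by `slo` over a rolling window."""
        total_minutes = window_days * 24 * 60
        return total_minutes * (1 - slo)

    for slo in (0.999, 0.9995, 0.9999):
        print(f"{slo:.2%} over 30 days allows {allowed_downtime_minutes(slo):.1f} minutes of downtime")

Over a 30-day window, 99.9% allows roughly 43 minutes of downtime while 99.99% allows only about 4, which is the cost curve the paragraph above is describing.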
2. Service-Level Agreement (SLA)
At Google, we distinguish between an SLO and a Service-Level Agreement (SLA). An SLA normally involves a promise to someone using your service that its availability SLO should meet a certain level over a certain period, and if it fails to do so then some kind of penalty will be paid. This might be a partial refund of the service subscription fee paid by customers for that period, or additional subscription time added for free. The concept is that going out of SLO is going to hurt the service team, so they will push hard to stay within SLO. If you’re charging your customers money, you will probably need an SLA.

Because of this, and because of the principle that availability shouldn’t be much better than the SLO, the availability SLO in the SLA is normally a looser objective than the internal availability SLO. This might be expressed in availability numbers: for instance, an availability SLO of 99.9% over one month, with an internal availability SLO of 99.95%. Alternatively, the SLA might only specify a subset of the metrics that make up the internal SLO. 
If you have an SLO in your SLA that is different from your internal SLO, as it almost always is, it’s important for your monitoring to measure SLO compliance explicitly. You want to be able to view your system’s availability over the SLA calendar period, and easily see if it appears to be in danger of going out of SLO. You will also need a precise measurement of compliance, usually from logs analysis. Since we have an extra set of obligations (described in the SLA) to paying customers, we need to measure queries received from them separately from other queries. That’s another benefit of establishing an SLA—it’s an unambiguous way to prioritize traffic.

When you define your SLA’s availability SLO, you need to be extra-careful about which queries you count as legitimate. For example, if a customer goes over quota because they released a buggy version of their mobile client, you may consider excluding all “out of quota” response codes from your SLA accounting. 
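As a rough illustration of the log-based compliance measurement described above, the sketch below computes SLA availability over paying customers only and excludes “out of quota” responses from the accounting, per the caveat above. The record format and the choice of status codes are assumptions made for the example, not details from the post:

    # Sketch: SLA compliance from request logs (hypothetical record format).
    # Assumes 429 is the "out of quota" response excluded from SLA accounting
    # and that 5xx responses count as failures.

    from dataclasses import dataclass

    @dataclass
    class LogRecord:
        customer_tier: str   # e.g. "paying" or "free"
        status_code: int

    def sla_availability(records, excluded_codes=frozenset({429})):
        """Availability over paying-customer requests, ignoring excluded codes."""
        eligible = [r for r in records
                    if r.customer_tier == "paying" and r.status_code not in excluded_codes]
        if not eligible:
            return 1.0
        good = sum(1 for r in eligible if r.status_code < 500)
        return good / len(eligible)

    logs = [LogRecord("paying", 200), LogRecord("paying", 503),
            LogRecord("paying", 429), LogRecord("free", 500)]
    print(f"SLA availability: {sla_availability(logs):.2%}")   # 50.00%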
3. Service-Level Indicator (SLI)
We also have a direct measurement of a service’s behavior: the frequency of successful probes of our system. This is a Service-Level Indicator (SLI). When we evaluate whether our system has been running within SLO for the past week, we look at the SLI to get the service availability percentage. If it goes below the specified SLO, we have a problem and may need to make the system more available in some way, such as running a second instance of the service in a different city and load-balancing between the two. 
If you want to know how reliable your service is, you must be able to measure the rates of successful and unsuccessful queries as your SLIs.
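A minimal sketch of that measurement: the SLI is just the ratio of successful events to total events over the evaluation window, compared against the SLO. The counts here are invented for illustration:

    # Sketch: an availability SLI as good_events / total_events, checked against an SLO.

    def availability_sli(successful: int, total: int) -> float:
        return successful / total if total else 1.0

    SLO = 0.999                       # illustrative target
    sli = availability_sli(successful=998_700, total=1_000_000)
    print(f"SLI = {sli:.4%}, SLO met: {sli >= SLO}")   # SLI = 99.8700%, SLO met: False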

Since the original post was published, we’ve made some updates to Stackdriver that let you incorporate SLIs even more easily into your Google Cloud Platform (GCP) workflows. You can now combine your in-house SLIs with the SLIs of the GCP services that you use, all in the same Stackdriver monitoring dashboard. At Next ‘18, the Spotlight session with Ben Treynor and Snapchat will illustrate how Snap uses its dashboard to get insight into what matters to its customers and map it directly to what information it gets from GCP, for an in-depth view of customer experience.
Automatic dashboards in Stackdriver for GCP services let you group the 50th, 95th and 99th percentile charts several ways: per service, per method and per response code. You can also view latency charts on a log scale to quickly find outliers.
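If you want to reproduce those percentile views outside of Stackdriver, computing them from raw latency samples is straightforward. A small Python sketch with synthetic data (the distribution parameters are made up):

    # Sketch: 50th/95th/99th percentile latencies from raw samples,
    # analogous to the per-service charts described above. Data is synthetic.

    import random
    import statistics

    random.seed(0)
    latencies_ms = [random.lognormvariate(3.0, 0.6) for _ in range(10_000)]

    cuts = statistics.quantiles(latencies_ms, n=100)   # 99 cut points: 1st..99th percentile
    p50, p95, p99 = cuts[49], cuts[94], cuts[98]
    print(f"p50={p50:.1f} ms  p95={p95:.1f} ms  p99={p99:.1f} ms")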

If you’re building a system from scratch, make sure that SLIs and SLOs are part of your system requirements. If you already have a production system but don’t have them clearly defined, then that’s your highest priority work. If you’re coming to Next ‘18, we look forward to seeing you there.

See related content:

Classic Sci-Fi Star Systems Keep Getting Ruined by Science [Published articles]

Having recently discussed some possible SF solutions to the vexing problems posed by red dwarf stars, it makes a certain amount of sense to consider the various star systems that have served as popular settings for some classic science fiction—even if science has more or less put the kibosh on any real hope of finding a habitable planet in the bunch.

In olden days, back before we had anything like the wealth of information about exoplanets we have now¹, SF authors playing it safe often decided to exclude the systems of pesky low-mass stars (M class) and short-lived high-mass stars (O, B, and A) as potential abodes of life. A list of promising nearby stars might have looked a bit like this²:

 

Star System             Distance from Sol (light-years)   Class           Notes
Sol                     0                                 G2V
Alpha Centauri A & B    4.3                               G2V & K1V       We do not speak of C
Epsilon Eridani         10.5                              K2V
Procyon A & B           11.4                              F5V – IV & DA
61 Cygni A & B          11.4                              K5V & K7V
Epsilon Indi            11.8                              K5V
Tau Ceti                11.9                              G8V

After Tau Ceti, there’s something of a dearth of K to F class stars until one reaches 40 Eridani at about 16 light-years, about which more later. And because it is a named star with which readers might be familiar, sometimes stories were set in the unpromising Sirius system; more about it later, as well.

There are a lot of SF novels, particularly ones of a certain vintage, that feature that particular set of stars. If one is of that vintage (as I am), Alpha Centauri, Epsilon Indi, Epsilon Eridani, Procyon, and Tau Ceti are old friends, familiar faces about whom one might comment favourably when it turns out, for example, that they are orbited by a pair of brown dwarfs or feature an unusually well-stocked Oort cloud. “What splendid asteroid belts Epsilon Eridani has,” one might observe loudly, in the confident tone of a person who never has any trouble finding a seat by themselves on the bus.

In fiction, Procyon is home to L. Sprague de Camp’s Osiris, Larry Niven’s We Made It, and Gordon R. Dickson’s Mara and Kultis, to name just a few planets. Regrettably, Procyon A should never ever have been tagged as “possesses potentially habitable worlds.” Two reasons: solar orbits and Procyon B’s DA classification.

Procyon is a binary star system. The larger star, Procyon A, is a main-sequence white star; its companion, Procyon B, is a faint white dwarf star. The two stars orbit around each other, at a distance that varies between 9 and 21 Astronomical Units (AU).

Procyon A is brighter than the Sun, and its habitable zone may lie at a distance between 2 and 4 AU. That is two to four times as far from Procyon A as the Earth is from our Sun.

Procyon B is hilariously dim, but it has a very respectable mass, roughly 60% that of our Sun. If Procyon A were to have a planet, it would be strongly affected by B’s gravitational influence. Perhaps that would put a hypothetical terrestrial world into an eccentric (albeit plot-friendly) orbit…or perhaps it would send a planet careening outside the system entirely.

But of course a hypothetical planet would not be human- or plot-friendly. B is a white dwarf. It may seem like a harmless wee thing³, but its very existence suggests that the whole system has had a tumultuous history. White dwarfs start off as regular medium-mass stars, use up their accessible fusion fuel, expand into red giants, shed a surprisingly large fraction of their mass (B may be less massive than A now, but the fact that B and not A is a white dwarf tells us that it used to be far more massive than it is now), and then settle down into a long senility as a slowly-cooling white dwarf.


None of this would have been good for a terrestrial world. Pre-red giant B would have had an even stronger, less predictable effect on our hypothetical world’s orbit. Even if the world had by some chance survived in a Goldilocks orbit, B would have scorched it.

This makes me sad. Procyon is, as I said, an old friend.

[I’ve thought of a dodge to salvage the notion of a potentially habitable world in the Procyon System. Take a cue from Phobetor and imagine a planet orbiting the white dwarf, rather than orbiting the main(ish) sequence star. We now know that there are worlds orbiting post-stellar remnants. This imaginary world would have to be very close to Procyon B if it is to be warm enough for life, which would mean a fast orbit. It would have a year about 40 hours long. It would be very, very tide-locked and you’d have to terraform it. Not promising. Still, on the plus side, the planet would be far too tightly bound to B for A’s mass to perturb it much. Better than nothing—and much better than the clinkers that may orbit A.]

A more reasonable approach might be to abandon Procyon as a bad bet all round and look for a similar system whose history is not quite as apocalyptic.

It’s not Sirius. Everything that is true of Procyon A and B is true for Sirius A and B as well, in spades. Say goodbye to Niven’s Jinx: if Sirius B didn’t flick it into deep space like a bleb of snot, it would have cinderized and evaporated the entire planet.

But…40 Eridani is also comparatively nearby. It is a triple star system, with a K, an M and a DA star. Unlike Procyon, however, B (the white dwarf) and C (the red dwarf) orbit each other 400+ AU from the interesting K class star. Where the presence of nearby Procyon B spells complete annihilation for any world around Procyon A, 40 Eridani B might only have caused a nightmarish apocalypse of sorts. The red giant might have pushed any existing world around A from ice age into a Carnian Pluvial Event, but it would not have gone full Joan of Arc on the planet. The shedding of the red giant’s outer layers might have stripped some of the hypothetical world’s atmosphere…but perhaps not all of it? The planet might have been turned from a volatile rich world into a desert, but life might have survived—it’s the kind of planetary backstory Andre Norton might have used.

 


1: We had Peter van de Kamp’s claims about planets orbiting Barnard’s Star, Lalande 21185, 61 Cygni, and others, but those failed to pan out.

2: With slightly different values for distance and type, but I don’t have any of my outdated texts handy. Also, ha ha, none of the sources I had back then ever mentioned the ages of the various systems, which (as it turns out) matter. Earth, after all, was an uninhabitable armpit for most of its existence, its atmosphere unbreathable by us. The ink is barely dry on Epsilon Indi and Epsilon Eridani. Don’t think Cretaceous Earth: think early Hadean.

3: Unless you know what a Type 1a supernova is.

In the words of Wikipedia editor TexasAndroid, prolific book reviewer and perennial Darwin Award nominee James Davis Nicoll is of “questionable notability.” His work has appeared in Publishers Weekly and Romantic Times as well as on his own websites, James Nicoll Reviews and Young People Read Old SFF (where he is assisted by editor Karen Lofstrom and web person Adrienne L. Travis). He is surprisingly flammable.

Bash QDB - 962113 [Published articles]

<@realitygaps> english - the php of spoken languages

Google's New Book: The Site Reliability Workbook [Published articles]

 

Google has released a new book: The Site Reliability Workbook — Practical Ways to Implement SRE.

It's the second book in their SRE series. How is it different than the previous Site Reliability Engineering book?

David Rensin, an SRE at Google, says:

It's a whole new book.  It's designed to sit next to the original on the bookshelf and for folks to bounce between them -- moving between principle and practice.

And from the preface:

The purpose of this second SRE book is (a) to add more implementation detail to the principles outlined in the first volume, and (b) to dispel the idea that SRE is implementable only at “Google scale” or in “Google culture.”

The Site Reliability Workbook weighs in at a hefty 508 pages and roughly follows the structure of the first book. It's organized into three different parts: Foundations, Practices, and Processes. There are three appendices: Example SLO Document, Example Error Budget Policy, and Results of Postmortem Analysis.

The table of contents is quite detailed, but here are the chapter titles:

  1. How SRE Relates to DevOps.  
  2. Implementing SLOs.
  3. SLO Engineering Case Studies.
  4. Monitoring.
  5. Alerting on SLOs.
  6. Eliminating Toil.
  7. Simplicity.
  8. On-Call.
  9. Incident Response.
  10. Postmortem Culture: Learning from Failure.
  11. Managing Load.
  12. Introducing Non-Abstract Large System Design.
  13. Data Processing Pipelines.
  14. Configuration Design and Best Practices.
  15. Configuration Specifics.
  16. Canarying Releases.
  17. Identifying and Recovering from Overload.
  18. SRE Engagement Model.
  19. SRE: Reaching Beyond Your Walls.
  20. SRE Team Lifecycles.
  21. Organizational Change Management in SRE.

What makes this book a tour de force are all the examples and case studies. You aren't just stuck with high level principles, you're given worked examples that make the principles concrete. That's hard to do and takes a lot of work.

In Chapter 2—Implementing SLOs—there's a detailed example involving the architecture for a mobile phone game. First, you must learn how to think "about how users interact with the system, and what sort of SLIs (Service Level Indicators) would measure the various aspects of a user’s experience." You're then taken through a number of SLIs and how to implement and measure them. Given the SLIs you learn how to calculate SLOs (Service Level Objectives). And once you have the SLO you're shown how to derive the error budget. That's not the end. You have to document the SLO and error budget policy. Then you need reports and dashboards that provide in-time snapshots of the SLO compliance of your services. Is that the end? No. You must continuously improve your SLO targets and learn how to make decisions using that information. And that's not the end either, but for the rest you'll need to read the book.
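To give a flavor of the error-budget arithmetic that chapter walks through, here is a hedged Python sketch: given an SLO and counts of good and total events, it reports how much of the budget has been consumed. The figures are invented and are not the book's worked example:

    # Sketch: error-budget consumption for an SLO over a reporting window.
    # All numbers are illustrative; the book's game example differs in detail.

    def error_budget_consumed(slo: float, good_events: int, total_events: int) -> float:
        """Fraction of the error budget spent so far (1.0 means the budget is exhausted)."""
        allowed_bad = (1 - slo) * total_events
        bad = total_events - good_events
        return bad / allowed_bad if allowed_bad else float("inf")

    consumed = error_budget_consumed(slo=0.999, good_events=9_993_000, total_events=10_000_000)
    print(f"Error budget consumed: {consumed:.0%}")   # 70%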

In Chapter 3—SLO Engineering Case Studies—Evernote and The Home Depot tell the story of their journey into SRE.

In Chapter 4—Monitoring—there are examples of moving information from logs to metrics, improving both logs and metrics, and keeping logs as the data source.

In Chapter 6—Eliminating Toil—there are detailed case studies on Reducing Toil in the Datacenter with Automation and Decommission Filer-Backed Home Directories.

And so it goes through nearly every chapter.

As you can see it's a very detailed and thorough book. The preface modestly contends it's a necessarily limited book, but I'd hate to see how many pages would be in the unlimited version.

Like the first book, the writing is clear, purposeful, and well organized. For a company well known for its influential publications, this is another winner.

Best of all? It's free until August 23rd!

Reddit Breach Highlights Limits of SMS-Based Authentication [Published articles]

Reddit.com today disclosed that a data breach exposed some internal data, as well as email addresses and passwords for some Reddit users. As Web site breaches go, this one doesn’t seem too severe. What’s interesting about the incident is that it showcases once again why relying on mobile text messages (SMS) for two-factor authentication (2FA) can lull companies and end users into a false sense of security.

In a post to Reddit, the social news aggregation platform said it learned on June 19 that between June 14 and 18 an attacker compromised several employee accounts at its cloud and source code hosting providers.

Reddit said the exposed data included internal source code as well as email addresses and obfuscated passwords for all Reddit users who registered accounts on the site prior to May 2007. The incident also exposed the email addresses of some users who had signed up to receive daily email digests of specific discussion threads.

Of particular note is that although the Reddit employee accounts tied to the breach were protected by SMS-based two-factor authentication, the intruder(s) managed to intercept that second factor.

“Already having our primary access points for code and infrastructure behind strong authentication requiring two factor authentication (2FA), we learned that SMS-based authentication is not nearly as secure as we would hope, and the main attack was via SMS intercept,” Reddit disclosed. “We point this out to encourage everyone here to move to token-based 2FA.”

Reddit didn’t specify how the SMS code was stolen, although it did say the intruders did not hack Reddit employees’ phones directly. Nevertheless, there are a variety of well established ways that attackers can intercept one-time codes sent via text message.

In one common scenario, known as a SIM-swap, the attacker masquerading as the target tricks the target’s mobile provider into tying the customer’s service to a new SIM card that the bad guys control. A SIM card is the tiny, removable chip in a mobile device that allows it to connect to the provider’s network. Customers can request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.

Another typical scheme involves mobile number port-out scams, wherein the attacker impersonates a customer and requests that the customer’s mobile number be transferred to another mobile network provider. In both port-out and SIM swap schemes, the victim’s phone service gets shut off and any one-time codes delivered by SMS (or automated phone call) get sent to a device that the attackers control.

APP-BASED AUTHENTICATION

A more secure alternative to SMS involves the use of a mobile app — such as Google Authenticator or Authy — to generate the one-time code that needs to be entered in addition to a password. This method is also sometimes referred to as a “time-based one-time password,” or TOTP. It’s more secure than SMS simply because the attacker in that case would need to steal your mobile device or somehow infect it with malware in order to gain access to that one-time code. More importantly, app-based two-factor removes your mobile provider from the login process entirely.

Fundamentally, two-factor authentication involves combining something you know (the password) with either something you have (a device) or something you are (a biometric component, for example). The core idea behind 2FA is that even if thieves manage to phish or steal your password, they still cannot log in to your account unless they also hack or possess that second factor.

Technically, 2FA via mobile apps and other TOTP-based methods is more accurately described as “two-step authentication” because the second factor is supplied via the same method as the first factor. For example, even though the second factor may be generated by a mobile-based app, that one-time code needs to be entered into the same login page on a Web site along with the user’s password — meaning both the password and the one-time code can still be subverted by phishing, man-in-the-middle and credential replay attacks.
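For the curious, the TOTP scheme (RFC 6238) is simple enough to sketch in a few lines of Python: HMAC a 30-second time-step counter with the shared secret and truncate the result to six digits. This is an illustrative sketch with a made-up secret, not a vetted implementation; use an established library and a real enrollment secret for anything serious:

    # Sketch of TOTP (RFC 6238): HMAC-SHA1 over a 30-second time-step counter,
    # dynamically truncated to a 6-digit code. For illustration only.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, timestep: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // timestep
        msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Example with a made-up base32 secret (the kind a site displays as a QR code):
    print(totp("JBSWY3DPEHPK3PXP"))

This is exactly why the method counts as “something you have”: the code is derived from a secret stored on your device and the current time, with no involvement from your mobile carrier.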

SECURITY KEYS

Probably the most secure form of 2FA available involves the use of hardware-based security keys. These inexpensive USB-based devices allow users to complete the login process simply by inserting the device and pressing a button. After a key is enrolled for 2FA at a particular site that supports keys, the user no longer needs to enter their password (unless they try to log in from a new device). The key works without the need for any special software drivers, and the user never has access to the code — so they can’t give it or otherwise leak it to an attacker.

The one limiting factor with security keys is that relatively few Web sites currently allow users to use them. Some of the most popular sites that do accept security keys include Dropbox, Facebook and Github, as well as Google’s various services.

Last week, KrebsOnSecurity reported that Google now requires all of its 85,000+ employees to use security keys for 2FA, and that it has had no confirmed reports of employee account takeovers since the company began requiring them at the beginning of 2017.

The most popular maker of security keys — Yubico — sells the basic model for $20, with more expensive versions that are made to work with mobile devices. The keys are available directly from Yubico, or via Amazon.com. Yubico also includes a running list of sites that currently support keys for authentication.

If you’re interested in migrating to security keys for authentication, it’s a good idea to purchase at least two of these devices. Virtually all sites that I have seen which allow authentication via security keys allow users to enroll multiple keys for authentication, in case one of the keys is lost or misplaced.

I would encourage all readers to pay a visit to twofactorauth.org, and to take full advantage of the most secure 2FA option available for any site you frequent. Unfortunately many sites do not support any kind of 2-factor authentication — let alone methods that go beyond SMS or a one-time code that gets read to you via an automated phone call. In addition, some sites that do support more robust, app- or key-based two-factor authentication still allow customers to receive SMS-based codes as a fallback method.

If the only 2FA options offered by a site you frequent are SMS and/or phone calls, this is still better than simply relying on a password. But it’s high time that popular Web sites of all stripes start giving their users more robust authentication options like TOTP and security keys. Many companies can be nudged in that direction if enough users start demanding it, so consider using any presence and influence you may have on social media platforms to make your voice heard on this important issue.

Someone Used a Deep Learning AI to Perfectly Insert Harrison Ford Into Solo: A Star Wars Story [Published articles]

Casting anyone other than Harrison Ford in the role of Han Solo just feels like sacrilege, but since Ford is now 76 years old, playing a younger version of himself would be all but impossible. Or at least impossible if you rely on the standard Hollywood de-aging tricks like makeup and CG. Artificial intelligence, it…


Gen Con interview: Mercedes Lackey [Published articles]

 

One of the featured guests at Gen Con this year was Mercedes Lackey, returning for the second Gen Con in a row after she and her husband Larry Dixon were with Zombie Orpheus Entertainment last year. Unfortunately, Larry Dixon was not able to make it this year after all, due to recovering from a shoulder injury. Mercedes Lackey attended her panels on Thursday; however, Friday morning she had to be hospitalized due to an allergic reaction to paint fumes in her recently renovated hotel room. She had to stay overnight at the hospital, but recovered enough to come back to the convention on Sunday, where I caught up with her for a very brief interview.

Me: This is Chris Meadows here with Mercedes Lackey, who I am very happy to see is all right after she gave us all a scare this weekend.

Mercedes Lackey: It’s alive!

Me: This is the second year in a row you’ve been here with Zombie Orpheus Entertainment. That’s kind of unusual.

M.L.: That’s because my husband Larry Dixon is doing screenwriting for them.

Me: So it’s continuing for the foreseeable future?

M.L.: Oh yes, he’s definitely in The Gamers screenwriting room. Gamers has been rebooted with the old characters coming back; you can get episode zero called “The Gamers: The Shadow Menace.” You can find it on the Zombie Orpheus website and you can find it on Amazon [Prime Streaming Video].

Me: When I spoke to you last year, you said that your Hunter trilogy was not going to go anywhere because Disney wasn’t interested in continuing it further?

M.L.: This is true. Disney only wanted the trilogy. So, unfortunately, unless I can get them to agree to let me publish independently, that’s probably going to be it. Unless suddenly it decides to take flight and become an enormous hit again.

Me: You never know.

M.L.: You never know.

Me: But what else do you have planned these days?

M.L.: Well, the last book of The Secret World Chronicle is out, Avalanche, and it wraps up all of the plot loose ends and a huge number of reveals. So, that’s out in August. And then in October is The Bartered Brides, which is the next Elemental Masters book. That’s another one with Sherlock Holmes and Nan and Sarah, except Sherlock doesn’t appear in this book because it takes place shortly after the infamous incident at the Reichenbach Falls. And I’m currently working on another book for Disney, which is called Godmother’s Apprentice—at least it’s called that right now—which is more of a standard fantasy. It’s kind of a Disney Princess for young adults rather than little girls, and I’m outlining the next of the Mags [Valdemar] books. This one is [about] his daughter Abby, who is an artificer.

Me: You already did one thing with godmothers back in your Five Hundred Kingdoms books.

M.L.: Right, this is a little different, this is more classic fairy godmothers.

Me: So, apart from the thing with the hotel, how has the con been for you this year?

M.L.: It’s been lots of fun. I’ve had a great time.

Me: It’s kind of like saying, “Apart from that Mrs. Lincoln…”

M.L.: Exactly!

Me: But do you think you will be back for the next year?

M.L.: I don’t know. We haven’t planned that far ahead.

Me: We’d certainly like to see you.

M.L.: I do know the next convention we’re doing is in the middle of September, it’s Salt Lake Comic Convention. We haven’t been anywhere near there, ever, so it will be a whole new group of fans.

Me: Well, that’s gonna be pretty neat. Have you any further plans for any self-published items?

M.L.: No, at this point I have so many contracts to write out that I literally don’t have any time to write anything to self-publish.

Me: I guess it’s better to have too much work than not enough.

M.L.: Oh yeah, we constantly need to do the mortgage payments still.

Me: Is there anything else you’d like to say before I close it down?

M.L.: Yes, I really really appreciate all the incredible outpouring of concern when I went down. You really know how wonderful the fan community is when there are seven hundred messages on Larry’s Twitter all asking about it.

Me: Well, I think I can speak for all of us fans when I say that I’m really glad that you’re doing well. And I hope we will see you back again here next year.

M.L.: I hope so, too.


If you found this post worth reading and want to kick in a buck or two to the author, click here.

TIL - The "Thagomizer", the spiked tail on a stegosaurid dinosaur, didn't have an official name till the cartoonist Gary Larson did a comic about it, named it, and the scientific community just accepted it and started using it too. [Published articles]

Minimal base Docker images compared [Published articles]

Some of you may remember a blog I did about container scanning. The result was that we're considering a move away from Alpine to use a distribution that's both small and has access to a CVE database so that vulnerability scanning is more accurate.

I've spent some time and compared some variations of Redhat, Debian and Ubuntu.

Ubuntu now provides a 30mb compressed image for their latest tag. Similarly Debian produces a stable-slim tag that's around 22mb.

The blog is here for anybody interested. Also a bit of a diversion in the middle related to Redhat that might be worth discussing.

https://kubedex.com/base-images/

Interested to know if anyone else is planning a move away from Alpine and if so what are you switching to.

submitted by /u/stevenacreman to r/docker

Dave brought my 10yo nephew up on stage last night in Kansas City and he killed it! [Published articles]

Are Universal Basic Incomes 'A Tool For Our Further Enslavement'? [Published articles]

Douglas Rushkoff, long-time open source advocate (and currently a professor of Digital Economics at the City University of New York, Queens College), is calling Universal Basic Incomes "no gift to the masses, but a tool for our further enslavement."

Uber's business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating. When it's looked at the way a software developer would, it's clear that UBI is really little more than a patch to a program that's fundamentally flawed.

The real purpose of digital capitalism is to extract value from the economy and deliver it to those at the top. If consumers find a way to retain some of that value for themselves, the thinking goes, you're doing something wrong or "leaving money on the table." Walmart perfected the softer version of this model in the 20th century. Move into a town, undercut the local merchants by selling items below cost, and put everyone else out of business. Then, as sole retailer and sole employer, set the prices and wages you want. So what if your workers have to go on welfare and food stamps. Now, digital companies are accomplishing the same thing, only faster and more completely.... Soon, consumers simply can't consume enough to keep the revenues flowing in. Even the prospect of stockpiling everyone's data, like Facebook or Google do, begins to lose its allure if none of the people behind the data have any money to spend.

To the rescue comes UBI. The policy was once thought of as a way of taking extreme poverty off the table. In this new incarnation, however, it merely serves as a way to keep the wealthiest people (and their loyal vassals, the software developers) entrenched at the very top of the economic operating system. Because of course, the cash doled out to citizens by the government will inevitably flow to them.... Under the guise of compassion, UBI really just turns us from stakeholders or even citizens to mere consumers. Once the ability to create or exchange value is stripped from us, all we can do with every consumptive act is deliver more power to people who can finally, without any exaggeration, be called our corporate overlords... if Silicon Valley's UBI fans really wanted to repair the economic operating system, they should be looking not to universal basic income but universal basic assets, first proposed by Institute for the Future's Marina Gorbis... As appealing as it may sound, UBI is nothing more than a way for corporations to increase their power over us, all under the pretense of putting us on the payroll. It's the candy that a creep offers a kid to get into the car or the raise a sleazy employer gives a staff member who they've sexually harassed. It's hush money.

Rushkoff's conclusion? "Whether its proponents are cynical or simply naive, UBI is not the patch we need."


FCC resorts to the usual malarkey defending itself against Mozilla lawsuit [Published articles]

Mozilla and other digital advocacy companies filed a lawsuit in August alleging the FCC had unlawfully overturned 2015’s net neutrality rules, by among other things “fundamentally mischaracteriz[ing] how internet access works.” The FCC has filed its official response, and as you might expect it has doubled down on those fundamental mischaracterizations.

The Mozilla suit, which you can read here or embedded at the bottom of this post, was sort of a cluster bomb of allegations striking at the FCC order on technical, legal, and procedural grounds. They aren’t new, revelatory arguments — they’re what net neutrality advocates have been saying for years.

There are at least a dozen separate allegations, but most fall under two general categories.

  1. That the FCC wrongly classifies broadband as an “information service” rather than a “telecommunications service.” There’s a long story behind this that I documented in the Commission Impossible series. The logic on which this determination is based has been refuted by practically every technical authority and really is just plain wrong. This pulls the rug out from numerous justifications for undoing the previous rules and instating new ones.
  2. That by failing to consider consumer complaints or perform adequate studies on the state of the industry, federal protections, and effects of the rules, the FCC’s order is “arbitrary and capricious” and thus cannot be considered to have been lawfully enacted.

The FCC’s responses to these allegations are likewise unsurprising. The bulk of big rulemaking documents like Restoring Internet Freedom isn’t composed of the actual rules but in the justification of those rules. So the FCC took preventative measures in its proposal identifying potential objections (like Mozilla’s) and dismissing them by various means.


That their counter-arguments on the broadband classification are nothing new is in itself a little surprising, though. These very same arguments were rejected by a panel of judges in the DC circuit back in 2015. In fact, recently-appointed Supreme Court Justice Brett Kavanaugh distinguished himself on that very decision by being wrong on every count and receiving an embarrassing intellectual drubbing by his better-informed peer, Judge Srinivasan.

As for the arbitrary and capricious allegation, the FCC merely reiterates that all its decisions were reasonable as justified at the time. Mozilla’s arguments are not given serious consideration; for example, when Mozilla pointed out that thousands of pages of comments had been essentially assumed by the FCC to be irrelevant without reviewing them, the FCC responds that it “reasonably decided not to include largely unverified consumer complaints in the record.”

These statements aren’t the end of the line; there will be more legal wrangling, amicus briefs, public statements, amended filings, and so on before this case is decided. But if you want a good summary of the hard legal arguments against the FCC and a vexing dismissal thereof, these two documents will serve for weekend reading.

The Mozilla suit:

Mozilla v FCC Filing by TechCrunch on Scribd

The FCC’s counter-arguments:

Mozilla v FCC Counterfiling by TechCrunch on Scribd

U.S. Robocall Data [Published articles]

Larry Wall's Very Own Home Page [Published articles]

What quote made you think a different way? [Published articles]

There are Many Problems With Mobile Privacy but the Presidential Alert Isn’t One of Them [Published articles]

On Wednesday, most cell phones in the US received a jarring alert at the same time. This was a test of the Wireless Emergency Alert (WEA) system, also commonly known as the Presidential Alert. This is an unblockable nationwide alert system which is operated by Federal Emergency Management Agency (*not* the President, as the name might suggest) to warn people of a catastrophic event such as a nuclear strike or nationwide terrorist attack. The test appears to have been mostly successful, and having a nationwide emergency alert system certainly doesn’t seem like a bad idea; but Wednesday’s test has also generated concern. One of the most shared tweets came from antivirus founder John McAfee.

Tweet by McAfee claiming that the Presidential Alert is tracking americans through a non-existent E911 chip

While there are legitimate concerns about the misuse of the WEA system and myriad privacy concerns with cellular phones and infrastructure (including the Enhanced 911, or E911, system) the tweet by McAfee gets it wrong.

How the WEA System Works

The Wireless Emergency Alert system is the same system used to send AMBER Alerts, Severe Weather Notifications, and Presidential Alerts to mobile devices. It works by sending an emergency message to every phone provider in the US, which then pushes the messages to every cell tower in the affected area. (In the case of a Presidential Alert, the entire country.) The cell towers then broadcast the message to every connected phone. This is a one-way broadcast that will go to every cell phone in the designated area, though in practice not every cell phone will receive the message.

McAfee’s tweet gets two key things wrong about this system: There is no such thing as an E911 chip, and the system does not give “them” the information he claims.  In fact, the Presidential Alert does not have any way to send data about your phone back to the mobile carrier, though your phone is sending data to mobile carriers all the time for other reasons.

Privacy Issues with Enhanced 911

This isn’t to say that there aren’t serious privacy issues with the E911 system. The E911 system was developed by the FCC in the early 2000’s after concerns that the increased use of cellular telephones would make it harder for emergency responders to locate a person in crisis. With a landline, first responders could simply go to the billing location for the phone, but a mobile caller could be miles from their home, even in another state. The E911 standard requires that a mobile device be able to send its location, with a high degree of accuracy, to emergency responders in response to a 911 call. While this is a good idea in the event of an actual crisis, law enforcement agencies have taken advantage of this technology to locate and track people in real time. EFF has argued that this was not the intended use of this system and that such use requires a warrant.

What’s more, the mobile phone system itself has a huge number of privacy issues: from mobile malware which can control your camera and read encrypted data, to Cell-Site Simulators which can pinpoint a phone’s exact location, to the “Upstream” surveillance program exposed by Edward Snowden, to privacy issues in the SS7 system that connects mobile phone networks to each other. There are myriad privacy issues with mobile devices that we should be deeply concerned about, but the Wireless Emergency Alert system is not one of them.

A tweet from Snowden about the "Upstream" surveillance program

There are legitimate concerns about the misuse of the wireless emergency alert system as well. There could be a false alarm issued through the system, sparking unnecessary panic, as happened in Hawaii earlier this year. For many, the idea that a president could use the WEA to push an unblockable message to their phones is deeply disturbing and sparked concerns that the system could be used to spread unblockable propaganda. Unlike other emergency alerts, the presidential alert can’t be turned off in phone software, by law. Fortunately for us, activating the WEA system is more complicated than, say, sending a tweet. To send out a Presidential Alert the president would have to, at the very least, convince someone in charge of the WEA system at FEMA to send such a message, and FEMA staffers may be reluctant to send out a non-emergency message, which could decrease the effectiveness of future emergency alerts.

As with any new system that is theoretically a good idea, we must remain vigilant that it is not misused. There are many legitimate reasons to be concerned about cellular privacy. It’s important that we keep an eye on the real threats and not get distracted by wild conspiracy theories.


Cafe in Providence, Rhode Island Serves Free Coffee To Students Who Provide Personal Data; Participants May Receive Info From Cafe's Corporate Sponsors [Published articles]

An anonymous reader shares an NPR report: Shiru Cafe looks like a regular coffee shop. Inside, machines whir, baristas dispense caffeine and customers hammer away on laptops. But all of the customers are students, and there's a reason for that. At Shiru Cafe, no college ID means no caffeine. "We definitely have some people that walk in off the street that are a little confused and a little taken aback when we can't sell them any coffee," said Sarah Ferris, assistant manager at the Shiru Cafe branch in Providence, R.I., located near Brown University. Ferris will turn away customers if they're not college students or faculty members. The cafe allows professors to pay, but students have something else the shop wants: their personal information. To get the free coffee, university students must give away their names, phone numbers, email addresses and majors, or in Brown's lingo, concentrations. Students also provide dates of birth and professional interests, entering all of the information in an online form. By doing so, the students also open themselves up to receiving information from corporate sponsors who pay the cafe to reach its clientele through logos, apps, digital advertisements on screens in stores and on mobile devices, signs, surveys and even baristas. According to Shiru's website: "We have specially trained staff members who give students additional information about our sponsors while they enjoy their coffee." The source article additionally explores privacy aspects of the business. The cafe, which is owned by Japanese company Enrission, says it shares general, aggregate data such as student majors and expected graduation years.


High-Quality Jurassic Park Stills Are the Ideal Decoration for the Serious Dinosaur Lover [Published articles]

A single frame from a film can often be a work of art. They should be displayed as such.


Scientists Accidentally Blow Up Their Lab With Strongest Indoor Magnetic Field Ever [Published articles]

An anonymous reader quotes a report from Motherboard: Earlier this year, researchers at the University of Tokyo accidentally created the strongest controllable magnetic field in history and blew the doors off their lab in the process. As detailed in a paper recently published in the Review of Scientific Instruments, the researchers produced the magnetic field to test the material properties of a new generator system. They were expecting to reach peak magnetic field intensities of around 700 Teslas, but the machine instead produced a peak of 1,200 Teslas. (For the sake of comparison, a refrigerator magnet has about 0.01 Tesla.) In both the Japanese and Russian experiments, the magnetic fields were generated using a technique called electromagnetic flux-compression. This technique causes a brief spike in the strength of the magnetic field by rapidly "squeezing" it to a smaller size. [...] Instead of using TNT to generate their magnetic field, the Japanese researchers dumped a massive amount of energy -- 3.2 megajoules -- into the generator to cause a weak magnetic field produced by a small coil to rapidly compress at a speed of about 20,000 miles per hour. This involves feeding 4 million amps of current through the generator, which is several thousand times more than a lightning bolt. When this coil is compressed as small as it will go, it bounces back. This produces a powerful shockwave that destroyed the coil and much of the generator. To protect themselves from the shockwave, the Japanese researchers built an iron cage for the generator. However they only built it to withstand about 700 Teslas, so the shockwave from the 1,200 Teslas ended up blowing out the door to the enclosure. While this is the strongest magnetic field ever generated in a controlled, indoor environment, the strongest magnetic field produced in history belongs to some Russian researchers who created a 2,800 Tesla magnetic field in 2001.


Benjamin Mako Hill: Shannon’s Ghost [Published articles]

I’m spending the 2018-2019 academic year as a fellow at the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford.

Claude Shannon on a bicycle.

Every CASBS study is labeled with a list of  “ghosts” who previously occupied the study. This year, I’m spending the year in Study 50 where I’m haunted by an incredible cast that includes many people whose scholarship has influenced and inspired me.

The top part of the list of ghosts in Study #50 at CASBS.

Foremost among this group is Study 50’s third occupant: Claude Shannon.

At 21 years old, Shannon wrote a master’s thesis (sometimes cited as the most important master’s thesis in history) that proved electrical circuits could encode any relationship expressible in Boolean logic and opened the door to digital computing. Incredibly, this is almost never cited as Shannon’s most important contribution. That came in 1948, when he published a paper titled A Mathematical Theory of Communication, which effectively created the field of information theory. Less than a decade after its publication, Aleksandr Khinchin (the mathematician behind my favorite mathematical constant) described the paper saying:

Rarely does it happen in mathematics that a new discipline achieves the character of a mature and developed scientific theory in the first investigation devoted to it…So it was with information theory after the work of Shannon.

As someone whose own research is seeking to advance computation and mathematical study of communication, I find it incredibly propitious to be sharing a study with Shannon.

Although I teach in a communication department, I know Shannon from my background in computing. I’ve always found it curious that, despite the fact that Shannon’s 1948 paper is almost certainly the most important single thing ever published with the word “communication” in its title, Shannon is rarely taught in communication curricula and is sometimes completely unknown to communication scholars.

In this regard, I’ve thought a lot about this passage in Robert Craig’s influential article “Communication Theory as a Field,” which argued:

In establishing itself under the banner of communication, the discipline staked an academic claim to the entire field of communication theory and research—a very big claim indeed, since communication had already been widely studied and theorized. Peters writes that communication research became “an intellectual Taiwan, claiming to be all of China when, in fact, it was isolated on a small island” (p. 545). Perhaps the most egregious case involved Shannon’s mathematical theory of information (Shannon & Weaver, 1948), which communication scholars touted as evidence of their field’s potential scientific status even though they had nothing whatever to do with creating it, often poorly understood it, and seldom found any real use for it in their research.

In preparation for moving into Study 50, I read a new biography of Shannon by Jimmy Soni and Rob Goodman and was excited to find that Craig—although accurately describing many communication scholars’ lack of familiarity—almost certainly understated the importance of Shannon to communication scholarship.

For example, the book form of Shannon’s 1948 article was published by the University of Illinois at the urging of, and under the editorial supervision of, Wilbur Schramm (one of the founders of modern mass communication scholarship), who was a major proponent of Shannon’s work. Everett Rogers (another giant in communication) devotes a chapter of his “History of Communication Studies”² to Shannon and to tracing his impact in communication. Both Schramm and Rogers built on Shannon in parts of their own work. Shannon has had an enormous impact, it turns out, in several subareas of communication research (e.g., attempts to model communication processes).

Although I find these connections exciting, my own research—like most of the rest of communication—is far from the substance of technical communication processes at the center of Shannon’s own work. In this sense, it can be a challenge to explain to my colleagues in communication—and to my fellow CASBS fellows—why I’m so excited to be sharing a space with Shannon this year.

Upon reflection, I think it boils down to two reasons:

  1. Shannon’s work is both mathematically beautiful and incredibly useful. His seminal 1948 article points to concrete ways that his theory can be useful in communication engineering including in compression, error correcting codes, and cryptography. Shannon’s focus on research that pushes forward the most basic type of basic research while remaining dedicated to developing solutions to real problems is a rare trait that I want to feature in my own scholarship.
  2. Shannon was incredibly playful. Shannon played games, juggled constantly, and was always seeking to teach others to do so. He tinkered, rode unicycles, built a flame-throwing trumpet, and so on. With Marvin Minsky, he invented the “ultimate machine”—a machine whose only function is to turn itself off—which he kept on his desk.
    A version of Shannon’s “ultimate machine” that is sitting on my desk at CASBS.

I have no misapprehension that I will accomplish anything like Shannon’s greatest intellectual achievements during my year at CASBS. I do hope to be inspired by Shannon’s creativity, focus on impact, and playfulness. In my own little ways, I hope to build something at CASBS that will advance mathematical and computational theory in communication in ways that Shannon might have appreciated.


  1. Incredibly, the year that Shannon was in Study 50, his neighbor in Study 51 was Milton Friedman. Two thoughts: (i) Can you imagine?! (ii) I definitely chose the right study!
  2. Rogers book was written, I found out, during his own stint at CASBS. Alas, it was not written in Study 50.

Happy 10th anniversary, Android [Published articles]

It’s been 10 years since Google took the wraps off the G1, the first Android phone. Since that time the OS has grown from buggy, nerdy iPhone alternative to arguably the most popular (or at least populous) computing platform in the world. But it sure as heck didn’t get there without hitting a few bumps along the road.

Join us for a brief retrospective on the last decade of Android devices: the good, the bad, and the Nexus Q.

HTC G1 (2008)

This is the one that started it all, and I have a soft spot in my heart for the old thing. Also known as the HTC Dream — this was back when we had an HTC, you see — the G1 was about as inauspicious a debut as you can imagine. Its full keyboard, trackball, slightly janky slide-up screen (crooked even in official photos), and considerable girth marked it from the outset as a phone only a real geek could love. Compared to the iPhone, it was like a poorly dressed whale.

But in time its half-baked software matured and its idiosyncrasies became apparent for the smart touches they were. To this day I occasionally long for a trackball or full keyboard, and while the G1 wasn’t pretty, it was tough as hell.

Moto Droid (2009)

Of course, most people didn’t give Android a second look until Moto came out with the Droid, a slicker, thinner device from the maker of the famed RAZR. In retrospect, the Droid wasn’t that much better or different than the G1, but it was thinner, had a better screen, and had the benefit of an enormous marketing push from Motorola and Verizon. (Disclosure: Verizon owns Oath, which owns TechCrunch, but this doesn’t affect our coverage in any way.)

For many, the Droid and its immediate descendants were the first Android phones they had — something new and interesting that blew the likes of Palm out of the water, but also happened to be a lot cheaper than an iPhone.

HTC/Google Nexus One (2010)

This was the fruit of the continued collaboration between Google and HTC, and the first phone Google branded and sold itself. The Nexus One was meant to be the slick, high-quality device that would finally compete toe-to-toe with the iPhone. It ditched the keyboard, got a cool new OLED screen, and had a lovely smooth design. Unfortunately it ran into two problems.

First, the Android ecosystem was beginning to get crowded. People had lots of choices and could pick up phones for cheap that would do the basics. Why lay the cash out for a fancy new one? And second, Apple would shortly release the iPhone 4, which — and I was an Android fanboy at the time — objectively blew the Nexus One and everything else out of the water. Apple had brought a gun to a knife fight.

HTC Evo 4G (2010)

Another HTC? Well, this was prime time for the now-defunct company. They were taking risks no one else would, and the Evo 4G was no exception. It was, for the time, huge: the iPhone had a 3.5-inch screen, and most Android devices weren’t much bigger, if they weren’t smaller.


The Evo 4G somehow survived our criticism (our alarm now seems extremely quaint, given the size of the average phone now) and was a reasonably popular phone, but ultimately is notable not for breaking sales records but breaking the seal on the idea that a phone could be big and still make sense. (Honorable mention goes to the Droid X.)

Samsung Galaxy S (2010)

Samsung’s big debut made a hell of a splash, with custom versions of the phone appearing in the stores of practically every carrier, each with their own name and design: the AT&T Captivate, T-Mobile Vibrant, Verizon Fascinate, and Sprint Epic 4G. As if the Android lineup wasn’t confusing enough already at the time!

Though the S was a solid phone, it wasn’t without its flaws, and the iPhone 4 made for very tough competition. But strong sales reinforced Samsung’s commitment to the platform, and the Galaxy series is still going strong today.

Motorola Xoom (2011)

This was an era in which Android devices were responding to Apple, and not vice versa as we find today. So it’s no surprise that hot on the heels of the original iPad we found Google pushing a tablet-focused version of Android with its partner Motorola, which volunteered to be the guinea pig with its short-lived Xoom tablet.

Although there are still Android tablets on sale today, the Xoom represented a dead end in development — an attempt to carve a piece out of a market Apple had essentially invented and soon dominated. Android tablets from Motorola, HTC, Samsung and others were rarely anything more than adequate, though they sold well enough for a while. This illustrated the impossibility of “leading from behind” and prompted device makers to specialize rather than participate in a commodity hardware melee.

Amazon Kindle Fire (2011)

And who better to illustrate than Amazon? Its contribution to the Android world was the Fire series of tablets, which differentiated themselves from the rest by being extremely cheap and directly focused on consuming digital media. Just $200 at launch and far less later, the Fire devices catered to the regular Amazon customer whose kids were pestering them about getting a tablet on which to play Fruit Ninja or Angry Birds, but who didn’t want to shell out for an iPad.

Turns out this was a wise strategy, and of course one Amazon was uniquely positioned to pursue, given its huge presence in online retail and its ability to subsidize the price down to where competitors couldn’t follow. Fire tablets were never particularly good, but they were good enough, and for the price you paid, that was kind of a miracle.

Xperia Play (2011)

Sony has always had a hard time with Android. Its Xperia line of phones was for years considered competent — I owned a few myself — and arguably industry-leading in the camera department. But no one bought them. And the one they bought the least of, at least relative to the hype it got, has to be the Xperia Play. This thing was supposed to be a mobile gaming platform, and the idea of a slide-out gamepad is great — but the whole thing basically cratered.

What Sony had illustrated was that you couldn’t just piggyback on the popularity and diversity of Android and launch whatever the hell you wanted. Phones didn’t sell themselves, and although the idea of playing PlayStation games on your phone might have sounded cool to a few nerds, it was never going to be enough to make it a million-seller. And increasingly that’s what phones needed to be.

Samsung Galaxy Note (2012)

As a sort of natural climax to the swelling phone trend, Samsung went all out with the first true “phablet,” and despite groans of protest the phone not only sold well but became a staple of the Galaxy series. In fact, it wouldn’t be long before Apple would follow on and produce a Plus-sized phone of its own.

The Note also represented a step towards using a phone for serious productivity, not just everyday smartphone stuff. It wasn’t entirely successful — Android just wasn’t ready to be highly productive — but in retrospect it was forward thinking of Samsung to make a go at it and begin to establish productivity as a core competence of the Galaxy series.

Google Nexus Q (2012)

This abortive effort by Google to spread Android out into a broader platform was one of a number of ill-considered choices at the time. No one really knew, apparently not at Google or anywhere else in the world, what this thing was supposed to do. I still don’t. As we wrote at the time:

Here’s the problem with the Nexus Q:  it’s a stunningly beautiful piece of hardware that’s being let down by the software that’s supposed to control it.

It was made, or rather nearly made in the USA, though, so it had that going for it.

HTC First — “The Facebook Phone” (2013)

The First got dealt a bad hand. The phone itself was a lovely piece of hardware with an understated design and bold colors that stuck out. But its default launcher, the doomed Facebook Home, was hopelessly bad.

How bad? Announced in April, discontinued in May. I remember visiting an AT&T store during that brief period, and even then the staff had been instructed on how to disable Facebook’s launcher and reveal the perfectly good phone beneath. The good news was that there were so few of these phones sold new that the entire stock started selling for peanuts on eBay and the like. I bought two and used them for my early experiments in ROMs. No regrets.

HTC One/M8 (2014)

This was the beginning of the end for HTC, but their last few years saw them update their design language to something that actually rivaled Apple. The One and its successors were good phones, though HTC oversold the “Ultrapixel” camera, which turned out to not be that good, let alone iPhone-beating.

As Samsung increasingly dominated, Sony plugged away, and LG and Chinese companies increasingly entered the fray, HTC was under assault and even a solid phone series like the One couldn’t compete. 2014 was a transition period with old manufacturers dying out and the dominant ones taking over, eventually leading to the market we have today.

Google/LG Nexus 5X and Huawei Nexus 6P (2015)

This was the line that brought Google into the hardware race in earnest. After the bungled Nexus Q launch, Google needed to come out swinging, and they did that by marrying their more pedestrian hardware with some software that truly zinged. Android 5 was a dream to use, Marshmallow had features that we loved … and the phones became objects that we adored.

We called the 6P “the crown jewel of Android devices”. This was when Google took its phones to the next level and never looked back.

Google Pixel (2016)

If the Nexus was, in earnest, the starting gun for Google’s entry into the hardware race, the Pixel line could be its victory lap. It’s an honest-to-god competitor to the Apple phone.

Gone are the days when Google was playing catch-up to Apple on features; instead, Google is a contender in its own right. The phone’s camera is amazing. The software works relatively seamlessly (bring back guest mode!), and the phone’s size and power are everything anyone could ask for. The sticker price, like that of Apple’s newest iPhones, is still a bit of a shock, but this phone is the teleological endpoint of the Android quest to rival its famous, fruitful contender.

The rise and fall of the Essential phone

In 2017 Andy Rubin, the creator of Android, debuted the first fruits of his new hardware startup studio, Playground Global, with the launch of Essential (and its first phone). The company had raised $300 million to bring the phone to market, and — as the first hardware device to come to market from Android’s creator — it was being heralded as the next new thing in hardware.

Here at TechCrunch, the phone received mixed reviews. Some on staff hailed it as the achievement of Essential’s stated vision of creating a “lovemark” for Android smartphones, while others found the device… inessential.

Ultimately, the market seemed to agree. Four months ago, plans for a second Essential phone were put on hold while the company explored a sale and pursued other projects. There has been little news since.

A Cambrian explosion in hardware

In the ten years since its launch, Android has become the world’s most widely used operating system. Some version of its software can be found on roughly 2.3 billion devices around the world, and it’s powering a technology revolution in countries like India and China, where mobile devices are the default way people compute and get online. As it enters its second decade, there’s no sign that anything is going to slow its growth (or dominance) as the operating system for much of the world.

Let’s see what the next ten years bring.

Perhaps the greatest timelapse ever taken. 4 years of an exploding star. [Published articles]

Here you see the first prototype for the Roomba [Published articles]

Flying through the clouds [Published articles]

A Japanese spacecraft just threw two small rovers at an asteroid [Published articles]

Scientists Find 'Super-Earth' In Star System From 'Star Trek' [Published articles]

In a wonderful example of truth validating fiction, the star system imagined as the location of Vulcan, Spock's home world in Star Trek, has a planet orbiting it in real life. From a report: A team of scientists spotted the exoplanet, which is about twice the size of Earth, as part of the Dharma Planet Survey (DPS), led by University of Florida astronomer Jian Ge. It orbits HD 26965, more popularly known as 40 Eridani, a triple star system 16 light years away from the Sun. Made up of a Sun-scale orange dwarf (Eridani A), a white dwarf (Eridani B), and a red dwarf (Eridani C), this system was selected to be "Vulcan's Sun" after Star Trek creator Gene Roddenberry consulted with astronomers Sallie Baliunas, Robert Donahue, and George Nassiopoulos about the best location for the fictional planet. "An intelligent civilization could have evolved over the aeons on a planet circling 40 Eridani," Roddenberry and the astronomers suggested in a 1991 letter to the editor published in Sky & Telescope. The three stars "would gleam brilliantly in the Vulcan sky," they added. The real-life exoplanet, known as HD 26965b, is especially tantalizing because it orbits just within the habitable zone of its star, meaning that it is theoretically possible that liquid water -- the key ingredient for life as we know it -- could exist on its surface.

Evernote just slashed 54 jobs, or 15 percent of its workforce [Published articles]

Research Proving People Don't RTFM, Resent 'Over-Featured' Products, Wins Ig Nobel Prize [Published articles]

An anonymous reader writes: Thursday the humor magazine Annals of Improbable Research held their 28th annual ceremony recognizing the real (but unusual) scientific research papers "that make people laugh, then think." And winning this year's coveted Literature prize was a paper titled "Life Is Too Short to RTFM: How Users Relate to Documentation and Excess Features in Consumer Products," which concluded that most people really, truly don't read the manual, "and most do not use all the features of the products that they own and use regularly..." "Over-featuring and being forced to consult manuals also appears to cause negative emotional experiences." Another team measured "the frequency, motivation, and effects of shouting and cursing while driving an automobile," which won them the Ig Nobel Peace Prize. Other topics of research included self-colonoscopies, removing kidney stones with roller coasters, and (theoretical) cannibalism. "Acceptance speeches are limited to 60 seconds," reports Ars Technica, "strictly enforced by an eight-year-old girl nicknamed 'Miss Sweetie-Poo,' who will interrupt those who exceed the time limit by repeating, 'Please stop. I'm bored.' Until they stop." You can watch the whole wacky ceremony on YouTube. The awards are presented by actual Nobel Prize laureates -- and at least one past winner of an Ig Nobel Prize later went on to win an actual Nobel Prize.

Snapshot from the heroic era of mobile computing [Published articles]

MJ Carlson calls this photo from a 1980s computer science textbook "the most glorious stock photo of all time." She is correct.

A Solar Filament Erupts [Published articles]

For Decades, Some of the Atomic Matter in the Universe Had Not Been Located. Recent Papers Reveal Where It Has Been Hiding [Published articles]

In a series of three recent papers, astronomers have identified the final chunks of all the ordinary matter in the universe. From a report: And despite the fact that it took so long to identify it all, researchers spotted it right where they had expected it to be all along: in extensive tendrils of hot gas that span the otherwise empty chasms between galaxies, more properly known as the warm-hot intergalactic medium, or WHIM.

Early indications that there might be extensive spans of effectively invisible gas between galaxies came from computer simulations done in 1998. "We wanted to see what was happening to all the gas in the universe," said Jeremiah Ostriker, a cosmologist at Princeton University who constructed one of those simulations along with his colleague Renyue Cen. The two ran simulations of gas movements in the universe acted on by gravity, light, supernova explosions and all the forces that move matter in space. "We concluded that the gas will accumulate in filaments that should be detectable," he said. Except they weren't -- not yet.

"It was clear from the early days of cosmological simulations that many of the baryons would be in a hot, diffuse form -- not in galaxies," said Ian McCarthy, an astrophysicist at Liverpool John Moores University. Astronomers expected these hot baryons to conform to a cosmic superstructure, one made of invisible dark matter, that spanned the immense voids between galaxies. The gravitational force of the dark matter would pull gas toward it and heat the gas up to millions of degrees. Unfortunately, hot, diffuse gas is extremely difficult to find.

To spot the hidden filaments, two independent teams of researchers searched for precise distortions in the CMB, the afterglow of the Big Bang. As that light from the early universe streams across the cosmos, it can be affected by the regions that it's passing through. In particular, the electrons in hot, ionized gas (such as the WHIM) should interact with photons from the CMB in a way that imparts some additional energy to those photons. The CMB's spectrum should get distorted. Unfortunately the best maps of the CMB (provided by the Planck satellite) showed no such distortions. Either the gas wasn't there, or the effect was too subtle to show up. But the two teams of researchers were determined to make them visible. From increasingly detailed computer simulations of the universe, they knew that gas should stretch between massive galaxies like cobwebs across a windowsill. Planck wasn't able to see the gas between any single pair of galaxies. So the researchers figured out a way to multiply the faint signal by a million.
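
What the report describes, CMB photons picking up a bit of extra energy from hot electrons, is the Sunyaev-Zel'dovich effect, and the way to "multiply the faint signal by a million" is essentially a stacking analysis: the distortion between any single pair of galaxies is buried far below Planck's noise, but averaging the aligned patches over something like a million galaxy pairs beats the noise down by roughly the square root of the number of pairs. The snippet below is only a toy sketch of that statistical idea; the signal level, noise level and pair count are invented for illustration and have nothing to do with the teams' actual measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup -- every number here is invented for illustration only.
true_bridge_signal = 5e-3   # per-pair gas signal, far below the noise
noise_sigma = 1.0           # per-pair map noise, same (arbitrary) units
n_pairs = 1_000_000         # order of magnitude of galaxy pairs stacked

# One noisy measurement of the patch between each galaxy pair.
measurements = true_bridge_signal + rng.normal(0.0, noise_sigma, n_pairs)

# "Stacking" = averaging the aligned patches over all pairs.
stacked = measurements.mean()
stacked_noise = noise_sigma / np.sqrt(n_pairs)   # noise falls as 1/sqrt(N)

print(f"single-pair S/N : {true_bridge_signal / noise_sigma:.4f}")
print(f"stacked estimate: {stacked:.5f} +/- {stacked_noise:.5f}")
print(f"stacked S/N     : {true_bridge_signal / stacked_noise:.1f}")
```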

Linus Torvalds apologizes for his behavior, takes time off [Published articles]

The Explosive Problem With Recycling Phones, Tablets and Other Gadgets: They Literally Catch Fire. [Published articles]

What happens to gadgets when you're done with them? Too often, they explode. From a report: Around the world, garbage trucks and recycling centers are going up in flames. The root of the problem: volatile lithium-ion batteries sealed inside our favorite electronics from Apple, Samsung, Microsoft and more. They're not only dangerous but also difficult to take apart -- making e-waste less profitable, and contributing to a growing recycling crisis. These days, rechargeable lithium-ion batteries are in smartphones, tablets, laptops, earbuds, toys, power tools, scooters, hoverboards and e-cigarettes. For all their benefits at making our devices slim, powerful and easy to recharge, lithium-ion batteries have some big costs. They contain cobalt, often mined in inhumane circumstances in places like the Congo. And when crushed, punctured, ripped or dropped, lithium-ion batteries can produce what the industry euphemistically calls a "thermal event." It happens because these batteries short circuit when the super-thin separator between their positive and negative parts gets breached. Old devices end up in trouble when we throw them in the trash, stick them in the recycling bin, or even responsibly bring them to an e-waste center. There isn't official data on these fires, but the anecdotal evidence is stark. Since the spring of 2018 alone, batteries have been suspected as the cause of recycling fires in New York, Arizona, Florida, Wisconsin, Indiana, Idaho, Scotland, Australia and New Zealand. In California, a recent survey of waste management facilities found 83 percent had at least one fire over the last two years, of which 40 percent were caused by lithium-ion batteries.

I’m NASA astronaut Scott Tingle. Ask me anything about adjusting to being back on Earth after my first spaceflight! [Published articles]

Earlier this summer, NASA astronaut Scott Tingle returned to Earth after spending 168 days living and working in low-Earth orbit aboard the International Space Station. During a six-month mission, Tingle and his crewmates completed hundreds of experiments, welcomed four cargo spacecraft delivering several tons of supplies and experiments, and performed spacewalks. To document the happenings aboard NASA’s orbiting laboratory, Tingle kept a journal that provides his real-time reflections about his first spaceflight, including this Captain’s Log that mentions the five things he will miss about being in space. Starting at 3 p.m. EDT on Sept. 12, you can ask him anything about adjusting to being back on our home planet!

Proof:

What questions would you ask an astronaut after their first spaceflight?

Join @Astro_Maker for a @reddit_AMA on Wednesday, Sept. 12 at 3pm ET as he takes your questions about adjusting to being back on Earth after spending six months living & working on the @Space_Station. pic.twitter.com/L9gLQdcwwW

— NASA (@NASA) September 10, 2018

https://twitter.com/NASA/status/1039228526619189251

Thanks for joining today's AMA! I'm signing off, but appreciate all the fun questions!

This is the world's oldest known drawing [Published articles]

Around 73,000 years ago, humans used a chunk of pigment to draw a pattern on a rock in a South African cave. The recently discovered fragment of the rock is now considered to be the oldest known drawing in history. From Science News:

The faded pattern consists of six upward-oriented lines crossed at an angle by three slightly curved lines, the researchers report online September 12 in Nature. Microscopic and chemical analyses showed that the lines were composed of a reddish, earthy pigment known as ocher.

The lines end abruptly at the rock’s edges, indicating that a larger and possibly more complex version of the drawing originally appeared on a bigger stone, the researchers say. Tiny pigment particles dotted the rock’s drawing surface, which had been ground smooth. Henshilwood suspects the chunk of rock was part of a large grinding stone on which people scraped pieces of pigment into crayonlike shapes.

Crosshatched designs similar to the drawing have been found engraved on shells at the site, Henshilwood says. So the patterns may have held some sort of meaning for their makers. But it’s hard to know whether the crossed lines represent an abstract idea or a real-life concern.

International Space Station almost straight down view of eye of Florence [Published articles]

9/11/18 @ 7:01 PM [Published articles]

Riderless BMW R1200GS eerily makes its way around a test track [Published articles]

The self-driving BMW makes its way around the track

Yamaha's Motobot is not alone, it seems. Behind closed doors, BMW has also been working on autonomous motorcycle technology for the last couple of years. And yesterday, BMW Motorrad released footage of a self-driving R1200GS negotiating its own way around a test track.

New Surveillance Court Orders Show That Even Judges Have Difficulty Understanding and Limiting Government Spying [Published articles]

In the United States, a secret federal surveillance court approves some of the government’s most enormous, opaque spying programs. It is near-impossible for the public to learn details about these programs, but, as it turns out, even the court has trouble, too. 

According to new opinions obtained by EFF last month, the Foreign Intelligence Surveillance Court (FISC) struggled to get full accounts of the government’s misuse of its spying powers for years. After learning about the misuse, the court also struggled to rein it in.

In a trio of opinions, a judge on the FISC raised questions about unauthorized surveillance and potential misuse of a request he had previously granted. In those cases, the secrecy inherent in the proceedings and the government’s obfuscation of its activities made it difficult for the court to grasp the scope of the problems and to prevent them from happening again.

The opinions were part of a larger, heavily redacted set—31 in total—released to EFF in late August as part of a Freedom of Information Act lawsuit we filed in 2016 seeking all significant FISC opinions. The government has released 73 FISC opinions to EFF in response to the suit, though it is continuing to completely withhold another six. We are fighting the government’s secrecy in court and hope to get the last opinions disclosed soon. You can read the newly released opinions here. To read the previous opinions released in the case, click here, here, and here.

Although many of the newly released opinions appear to be decisions approving surveillance and searches of particular individuals, several raise questions about how well equipped FISC judges are to protect individuals’ statutory and constitutional rights when the government is less than candid with the court, underscoring EFF’s concerns with the FISC’s ability to safeguard individual privacy and free expression.

Court Frustrated by Government’s “Chronic Tendency” to Not Disclose the Full Scope of Its Surveillance

An opinion written by then-FISC Judge Thomas F. Hogan shows that even the judges approving foreign intelligence surveillance on specific targets have difficulty understanding whether the NSA is complying with its orders, much less the Constitution.

The opinion, the date of which is redacted, orders the deletion of materials the NSA collected without court authorization. The opinion recounts how after the court learned that the NSA had exceeded an earlier issued surveillance order—resulting in surveillance it was not authorized to conduct—the government argued that it had not actually engaged in unauthorized surveillance. Instead, the government argued that it had only violated “minimization procedures,” which are restrictions on the use of the material, not the collection of it.

Judge Hogan, who served on the FISC from 2009-16 and was its chief judge from 2014-16, expressed frustration both with the government’s argument and with its lack of candor, as the court believed officials had previously acknowledged that the surveillance was unauthorized. The opinion then describes how the surveillance failed to comply with several provisions of the Foreign Intelligence Surveillance Act (FISA) in collecting the intelligence. Although the redactions make it difficult to know exactly which FISA provisions the government did not comply with, the statute requires the government to identify a specific target for surveillance and to show some proof that the facilities being surveilled were used by a foreign power or an agent of one.

As a result, the court ruled that the surveillance was unauthorized. It went on to note that the government’s failure to meet FISA’s requirements also inhibited the court’s ability to do its job, writing that “the Court was deprived of an adequate understanding of the facts known to the NSA and, even if the government were correct that acquisition [redacted] was authorized, a clear and express record of that authorization is lacking.”

The opinion goes on to note that the government’s conduct provided additional reasons to rule that the surveillance was unauthorized. It wrote:

Moreover, the government’s failures in this case are not isolated ones. The government has exhibited a chronic tendency to mis-describe the actual scope of NSA acquisitions in its submissions to this Court. These inaccuracies have previously contributed to unauthorized electronic surveillance and other forms of statutory and constitutional deficiency.

FISC Judge Frustrated by Government’s Years-Long Failure to Disclose the Scope of Its Surveillance

In another order, Judge Hogan required the government to answer a series of questions after it appeared that the NSA’s surveillance activities went beyond what the court authorized. The order shows that, though the FISC approved years-long surveillance, government officials knowingly collected information about individuals that the court never approved.

The court expressed concern that the “government has not yet provided a full account of non-compliance in this case.” Although the particular concerns the court had with the government are redacted, the court appeared frustrated by the fact that it had been kept in the dark for so long:

It is troubling that, for many years, NSA failed to disclose the actual scope of its surveillance, with the result that it lacked authorization for some of the surveillance that it conducted. It is at least troubling that, once the NSA and the Department of Justice had finally recognized that unauthorized surveillance was being conducted, they failed to take prompt measures to discontinue the surveillance, or even to obtain prospective authorization for the already-ongoing collection.

As a result, the court ordered the government to respond to several questions: How and why was the surveillance allowed to continue after officials realized it was likely unauthorized? What steps were being taken to prevent something like it from happening again? What steps were officials taking to identify the information the government obtained through the unauthorized surveillance?

The court wrote that it would examine the government’s responses “and determine whether a hearing is required to complete the record on these issues.”

Court Concerned By FBI’s Use of Ambiguity in Order to Conduct Unauthorized Surveillance

In another order with its date redacted, Judge Hogan describes a case in which the FBI used some ambiguous language in an earlier order to conduct surveillance that the court did not authorize.

Although the specifics of the incident are unclear, it appears as though the FISC had previously authorized surveillance of a particular target and identified in the order certain communications providers—such as those that provide email, phone, or messaging services—that would be surveilled. The FBI later informed the court that it had engaged in “roving electronic surveillance” and targeted other communications providers. The court was concerned that the roving surveillance “may have exceeded the scope of the authorization reflected” in the earlier order.

Typically, FISA requires that the government identify the “facilities or places” used by a target that it will surveil. However, the law contains a provision that allows the government to engage in “roving electronic surveillance,” which is when the court allows the government to direct surveillance at unspecified communications providers or others that may help follow a target who switches services.

To get an order granting it authority to engage in roving electronic surveillance, the government has to show with specific facts that the surveillance target’s actions may thwart its ability to identify the service or facility the target uses to communicate. For example, the target may frequently change phone numbers or email accounts, making it difficult for the government to identify a specific communications provider.

The problem in this particular case, according to the court, was that the FBI didn’t seek authority to engage in roving electronic surveillance. “The Court does not doubt that it could have authorized” roving electronic surveillance, it wrote. “But the government made no similar request in the above-captioned docket.” Moreover, the government never provided facts that established the target may thwart their ability to identify the service provider.

Although the court was concerned with the government’s unauthorized surveillance, it acknowledged that perhaps its order was not clear and that it “sees no indication of bad faith on the part of the agents or attorneys involved.”

Other FISC Decisions Authorize Various Surveillance and Searches

The other opinions released to EFF detail a variety of orders and decisions issued by the court authorizing various forms of surveillance. Because many are heavily redacted, it is difficult to know precisely what they concern. For example:

  • One opinion explains the FISC’s reasoning for authorizing an order to install a pen register/trap and trace device—which allows for the collection of communications’ metadata—and to allow the government to acquire business records. The court cites the Supreme Court’s 1979 decision in Smith v. Maryland to rule that the surveillance at issue does not violate the Fourth Amendment.

  • Another opinion concerns an issue that other, previously disclosed FISC opinions have also wrestled with: the government’s aggressive interpretation of FISA and similar laws that authorize phone call metadata collection that can sometimes also capture the content of communications. The government asked to be able to record the contents of the communications it captured, though it said it would not use those contents in its investigations unless there was an emergency. The court ordered the government to submit a report explaining how it was ensuring that it did not make use of any contents of communications it had recorded.

  • Several other opinions, including this one, authorize electronic surveillance of specific targets along with approving physical searches of property.

  • In another case the court authorized a search warrant to obtain “foreign intelligence information.” The warrant authorized the government to enter the property without consent of the owner or resident, though it also ordered that the search “shall be conducted with the minimum physical intrusion necessary to obtain the information being sought.”

Obtaining these FISC opinions is extraordinarily important, both for government transparency and for understanding how the nation’s intelligence agencies have gone beyond what even the secret surveillance court has authorized.

Having successfully pried the majority of these opinions away from the government’s multi-layered regime of secrecy, we are all the more hopeful to receive the rest.

You can review the full set of documents here.

In a Few Days, Credit Freezes Will Be Fee-Free [Published articles]

Later this month, all of the three major consumer credit bureaus will be required to offer free credit freezes to all Americans and their dependents. Maybe you’ve been holding off freezing your credit file because your home state currently charges a fee for placing or thawing a credit freeze, or because you believe it’s just not worth the hassle. If that accurately describes your views on the matter, this post may well change your mind.

A credit freeze — also known as a “security freeze” — restricts access to your credit file, making it far more difficult for identity thieves to open new accounts in your name.

Currently, many states allow the big three bureaus — Equifax, Experian and TransUnion — to charge a fee for placing or lifting a security freeze. But thanks to a federal law enacted earlier this year, after Sept. 21, 2018 it will be free to freeze and unfreeze your credit file and those of your children or dependents throughout the United States.

KrebsOnSecurity has for many years urged readers to freeze their files with the big three bureaus, as well as with a distant fourth — Innovis — and the NCTUE, an Equifax-operated credit checking clearinghouse relied upon by most of the major mobile phone providers.

There are dozens of private companies that specialize in providing consumer credit reports and scores to specific industries, including real estate brokers, landlords, insurers, debt buyers, employers, banks, casinos and retail stores. A handy PDF produced earlier this year by the Consumer Financial Protection Bureau (CFPB) lists all of the known entities that maintain, sell or share credit data on U.S. citizens.

The CFPB’s document includes links to Web sites for 46 different consumer credit reporting entities, along with information about your legal rights to obtain data in your reports and dispute suspected inaccuracies with the companies as needed. My guess is the vast majority of Americans have never heard of most of these companies.

Via numerous front-end Web sites, each of these mini credit bureaus serves thousands or tens of thousands of people who work in the above-mentioned industries and who have the ability to pull credit and other personal data on Americans. In many cases, online access to look up data through these companies is secured by nothing more than a username and password that can be stolen or phished by cybercrooks and abused to pull privileged information on consumers.

In other cases, it’s trivial for anyone to sign up for these services. For example, how do companies that provide background screening and credit report data to landlords decide who can sign up as a landlord? Answer: Anyone can be a landlord (or pretend to be one).

SCORE ONE FOR FREEZES

The truly scary part? Access to some of these credit lookup services is supposed to be secured behind a login page, but often isn’t. Consider the service pictured below, which for $44 will let anyone look up the credit score of any American who hasn’t already frozen their credit files with the big three. Worse yet, you don’t even need to have accurate information on a target — such as their Social Security number or current address.

KrebsOnSecurity was made aware of this particular portal by Alex Holden, CEO of Milwaukee, Wisc.-based cybersecurity firm Hold Security LLC [full disclosure: This author is listed as an adviser to Hold Security, however this is and always has been a volunteer role for which I have not been compensated].

Holden’s wife Lisa is a mortgage broker, and as such she has access to a more full-featured version of the above-pictured consumer data lookup service (among others) for the purposes of helping clients determine a range of mortgage rates available. Mrs. Holden said the version of this service that she has access to will return accurate, current and complete credit file information on consumers even if one enters a made-up SSN and old address on an individual who hasn’t yet frozen their credit files with the big three.

“I’ve noticed in the past when I do a hard pull on someone’s credit report and the buyer gave me the wrong SSN or transposed some digits, not only will these services give me their credit report and full account history, it also tells you what their correct SSN is,” Mrs. Holden said.

With Mr. Holden’s permission, I gave the site pictured above an old street address for him plus a made-up SSN, and provided my credit card number to pay for the report. The document generated by that request said TransUnion and Experian were unable to look up his credit score with the information provided. However, Equifax not only provided his current credit score, it helpfully corrected the false data I entered for Holden, providing the last four digits of his real SSN and current address.

“We assume our credit report is keyed off of our SSN or something unique about ourselves,” Mrs. Holden said. “But it’s really keyed off your White Pages information, meaning anyone can get your credit report if they are in the know.”

I was pleased to find that I was unable to pull my own credit score through this exposed online service, although the site still charged me $44. The report produced simply said the consumer in question had requested that access to this information be restricted. But the real reason was simply that I’ve had my credit file frozen for years now.

Many media outlets are publishing stories this week about the one-year anniversary of the breach at Equifax that exposed the personal and financial data on more than 147 million people. But it’s important for everyone to remember that as bad as the Equifax breach was (and it was a total dumpster fire all around), most of the consumer data exposed in the breach has been for sale in the cybercrime underground for many years on a majority of Americans — including access to consumer credit reports. If anything, the Equifax breach may have simply helped ID thieves refresh some of those criminal data stores.

It costs $35 worth of bitcoin through this cybercrime service to pull someone’s credit file from the three major credit bureaus. There are many services just like this one, which almost certainly abuse hacked accounts from various industries that have “legitimate” access to consumer credit reports.

THE FEE-FREE FREEZE

According to the U.S. Federal Trade Commission, when the new law takes effect on September 21, Equifax, Experian and TransUnion must each set up a webpage for requesting fraud alerts and credit freezes.

The law also provides additional ID theft protections to minors. Currently, some state laws allow you to freeze a child’s credit file, while others do not. Starting Sept. 21, no matter where you live you’ll be able to get a free credit freeze for kids under 16 years old.

Identity thieves can and often do target minors, but this type of fraud usually isn’t discovered until the affected individual tries to apply for credit for the first time, at which point it can be a long and expensive road to undo the mess. As such, I would highly recommend that readers who have children or dependents take full advantage of this offering once it’s available for free nationwide.

In addition, the law requires the big three bureaus to offer free electronic credit monitoring services to all active duty military personnel. It also changes the rules for “fraud alerts,” which currently are free but only last for 90 days. With a fraud alert on your credit file, lenders or service providers should not grant credit in your name without first contacting you to obtain your approval — by phone or whatever other method you specify when you apply for the fraud alert.

Under the new law, fraud alerts last for one year, but consumers can renew them each year. Bear in mind, however, that while lenders and service providers are supposed to seek and obtain your approval if you have a fraud alert on your file, they’re not legally required to do this.

A key unanswered question about these changes is whether the new dedicated credit bureau freeze sites will work any more reliably than the current freeze sites operated by the big three bureaus. The Web and social media are littered with consumer complaints — particularly over the past year — about the various freeze sites freezing up and returning endless error messages, or simply discouraging consumers from filing a freeze thanks to insecure Web site components.

It will be interesting to see whether these new freeze sites will try to steer consumers away from freezes and toward other in-house offerings, such as paid credit reports, credit monitoring, or “credit lock” services. All three big bureaus tout their credit lock services as an easier and faster alternative to freezes.

According to a recent post by CreditKarma.com, consumers can use these services to quickly lock or unlock access to credit inquiries, although some bureaus can take up to 48 hours. In contrast, they can take up to five business days to act on a freeze request, although in my experience the automated freeze process via the bureaus’ freeze sites has been more or less instantaneous (assuming the request actually goes through).

TransUnion and Equifax both offer free credit lock services, while Experian’s is free for 30 days and $19.99 for each additional month. However, TransUnion says those who take advantage of their free lock service agree to receive targeted marketing offers. What’s more, TransUnion also pushes consumers who sign up for its free lock service to subscribe to its “premium” lock services for a monthly fee with a perpetual auto-renewal.

Unsurprisingly, the bureaus’ use of the term credit lock has confused many consumers; this was almost certainly by design. But here’s one basic fact consumers should keep in mind about these lock services: Unlike freezes, locks are not governed by any law, meaning that the credit bureaus can change the terms of these arrangements when and if it suits them to do so.

If you’d like to go ahead with freezing your credit files now, this Q&A post from the Equifax breach explains the basics, and includes some other useful tips for staying ahead of identity thieves. Otherwise, check back here later this month for more details on the new free freeze sites.

We Must Slow Innovation in Internet-Connected Things, Says Bruce Schneier [Published articles]

Bruce Schneier argues that governments must step in now to force companies developing connected gadgets to make security a priority rather than an afterthought. Schneier makes these arguments in his new book, Click Here to Kill Everybody, which is on sale now. Here's an excerpt from his interview with MIT Technology Review:

Technology Review: So what do we need to do to make the Internet+ era safer?

Schneier: There's no industry that's improved safety or security without governments forcing it to do so. Again and again, companies skimp on security until they are forced to take it seriously. We need government to step up here with a combination of things targeted at firms developing internet-connected devices. They include flexible standards, rigid rules, and tough liability laws whose penalties are big enough to seriously hurt a company's earnings.

Technology Review: But won't things like strict liability laws have a chilling effect on innovation?

Schneier: Yes, they will chill innovation -- but that's what's needed right now! The point is that innovation in the Internet+ world can kill you. We chill innovation in things like drug development, aircraft design, and nuclear power plants because the cost of getting it wrong is too great. We're past the point where we need to discuss regulation versus no-regulation for connected things; we have to discuss smart regulation versus stupid regulation.

Technology Review: There's a fundamental tension here, though, isn't there? Governments also like to exploit vulnerabilities for spying, law enforcement, and other activities.

Schneier: Governments are certainly poachers as well as gamekeepers. I think we'll resolve this long-standing tension between offense and defense eventually, but it's going to be a long, hard slog to get there.

Pluto Should Be Reclassified as a Planet, Experts Say [Published articles]

The reason Pluto lost its planet status is not valid, according to new research from the University of Central Florida in Orlando. From a report: In 2006, the International Astronomical Union, a global group of astronomy experts, established a definition of a planet that required it to "clear" its orbit, or in other words, be the largest gravitational force in its orbit. Since Neptune's gravity influences its neighboring planet Pluto, and Pluto shares its orbit with frozen gases and objects in the Kuiper belt, that meant Pluto was out of planet status. However, in a new study published online Wednesday in the journal Icarus, UCF planetary scientist Philip Metzger, who is with the university's Florida Space Institute, reported that this standard for classifying planets is not supported in the research literature. Metzger, who is lead author on the study, reviewed scientific literature from the past 200 years and found only one publication -- from 1802 -- that used the clearing-orbit requirement to classify planets, and it was based on since-disproven reasoning. He said moons such as Saturn's Titan and Jupiter's Europa have been routinely called planets by planetary scientists since the time of Galileo. "The IAU definition would say that the fundamental object of planetary science, the planet, is supposed to be defined on the basis of a concept that nobody uses in their research," Metzger said. "And it would leave out the second-most complex, interesting planet in our solar system." "We now have a list of well over 100 recent examples of planetary scientists using the word planet in a way that violates the IAU definition, but they are doing it because it's functionally useful," he said. "It's a sloppy definition," Metzger said of the IAU's definition. "They didn't say what they meant by clearing their orbit. If you take that literally, then there are no planets, because no planet clears its orbit."

LA libraries replace fines for young readers with in-library "read-offs" [Published articles]

Stan Rehm writes, "An uncommonly sensible new policy in Los Angeles libraries now allows children with overdue book fees to 'read off' their fines in the library."

Blockchains Are Not Safe For Voting, Concludes NAP Report [Published articles]

The National Academies Press has released a 156-page report, called "Securing the Vote: Protecting American Democracy," concluding that blockchains are not safe for the U.S. election system. "While the notion of using a blockchain as an immutable ballot box may seem promising, blockchain technology does little to solve the fundamental security issues of elections, and indeed, blockchains introduce additional security vulnerabilities," the report states. "In particular, if malware on a voter's device alters a vote before it ever reaches a blockchain, the immutability of the blockchain fails to provide the desired integrity, and the voter may never know of the alteration."

The report goes on to say that "Blockchains do not provide the anonymity often ascribed to them." It continues: "In the particular context of elections, voters need to be authorized as eligible to vote and as not having cast more than one ballot in the particular election. Blockchains do not offer means for providing the necessary authorization. [...] If a blockchain is used, then cast ballots must be encrypted or otherwise anonymized to prevent coercion and vote-selling."

The New York Times summarizes the findings: The cautiously worded report calls for conducting all federal, state and local elections on paper ballots by 2020. Its other top recommendation would require nationwide use of a specific form of routine postelection audit to ensure votes have been accurately counted. The panel did not offer a price tag for its recommended overhaul. New York University's Brennan Center has estimated that replacing aging voting machines over the next few years could cost well over $1 billion.

The 156-page report [...] bemoans a rickety system compromised by insecure voting equipment and software whose vulnerabilities were exposed more than a decade ago and which are too often managed by officials with little training in cybersecurity. Among its specific recommendations was a mainstay of election reformers: All elections should use human-readable paper ballots by 2020. Such systems are intended to assure voters that their vote was recorded accurately. They also create a lasting record of "voter intent" that can be used for reliable recounts, which may not be possible in systems that record votes electronically. [...] The panel also calls for all states to adopt a type of post-election audit that employs statistical analysis of ballots prior to results certification. Such "risk-limiting" audits are designed to uncover miscounts and vote tampering. Currently only three states mandate them.
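
The report's malware point is worth making concrete: a blockchain only guarantees that whatever was appended cannot be quietly changed afterwards; it says nothing about whether the ballot was correct at the moment it was appended. The toy hash chain below (invented names, not any real voting system) records a vote that malware flipped on the voter's device, and every integrity check still passes.

```python
import hashlib
import json

def block_hash(prev_hash: str, payload: dict) -> str:
    """Hash of a block = SHA-256 over the previous hash plus the payload."""
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def append_block(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"payload": payload, "prev": prev, "hash": block_hash(prev, payload)})

def chain_is_valid(chain: list) -> bool:
    """Verify every link; this detects edits made *after* a block was appended."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["payload"]):
            return False
        prev = block["hash"]
    return True

chain: list = []

# The voter intends to vote for candidate A...
intended_vote = {"voter": "anon-42", "choice": "A"}

# ...but malware on the device flips the choice before submission.
submitted_vote = dict(intended_vote, choice="B")

append_block(chain, submitted_vote)

# The ledger is perfectly consistent; immutability says nothing about
# whether the recorded choice matches what the voter intended.
print("chain valid:", chain_is_valid(chain))              # True
print("recorded choice:", chain[0]["payload"]["choice"])  # "B", not "A"
```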

The future is here today: you can't play Bach on Youtube because Sony says they own his compositions [Published articles]

James Rhodes, a pianist, performed a Bach composition for his YouTube channel, but it didn't stay up -- YouTube's Content ID system pulled it down and accused him of copyright infringement because Sony Music Global had claimed that they owned 47 seconds' worth of his personal performance of a song whose composer has been dead for 300 years.

Surgical precision... [Published articles]

Why I never finish my Haskell programs [Published articles]

Five Eyes Intelligence Alliance Argues 'Privacy is Not Absolute' in Push For Encryption Backdoors [Published articles]

The Five Eyes, the intelligence alliance between the U.S., U.K., Canada, Australia, and New Zealand, issued a statement warning they believe "privacy is not absolute" and tech companies must give law enforcement access to encrypted data or face "technological, enforcement, legislative or other measures to achieve lawful access solutions." Slashdot reader Bismillah shares a report: The governments of Australia, United States, United Kingdom, Canada and New Zealand have made the strongest statement yet that they intend to force technology providers to provide lawful access to users' encrypted communications. At the Five Country Ministerial meeting on the Gold Coast last week, security and immigration ministers put forward a range of proposals to combat terrorism and crime, with a particular emphasis on the internet. As part of that, the countries that share intelligence with each other under the Five-Eyes umbrella agreement intend to "encourage information and communications technology service providers to voluntarily establish lawful access solutions to their products and services." Such solutions will apply to products and services operated in the Five-Eyes countries, which could legislate to compel their implementation. "Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions," the Five-Eyes joint statement on encryption said.

How The Shining's Camera Creates Constant Unease [Published articles]

The Shining is a brilliant film, and it’s a scary film, and those two things are true for the same reason.

The State of Agile Software in 2018 [Published articles]

On the surface, the world of agile software development is bright, since it is now mainstream. But the reality is troubling, because much of what is done is faux-agile, disregarding agile's values and principles, writes programmer Martin Fowler. The three main challenges we should focus on, he adds, are fighting the Agile Industrial Complex and its habit of imposing process upon teams, raising the importance of technical excellence, and organizing our teams around products (rather than projects). An anonymous reader shares his post:

Now agile is everywhere, it's popular, but there's been an important shift. It was summed up quite nicely by a colleague of mine who said, "In the old days when we talked about doing agile, there was always this pushback right from the beginning from a client, and that would bring out some important conversations that we would have. Now, they say, 'Oh, yeah, we're doing agile already,' but you go in there and you suddenly find there's some very big differences to what we expect to be doing."

As ThoughtWorks, we like to think we're very deeply steeped in agile notions, and yet we're going to a company that says, "Yeah, we're doing agile, it's no problem," and we find a very different world to what we expect. Our challenge at the moment isn't making agile a thing that people want to do, it's dealing with what I call faux-agile: agile that's just the name, but none of the practices and values in place. Ron Jeffries often refers to it as "Dark Agile," or specifically "Dark Scrum." This is actually even worse than just pretending to do agile, it's actively using the name "agile" against the basic principles of what we were trying to do when we talked about doing this kind of work in the late 90s and at Snowbird.

So that's our current battle. It's not about getting agile respectable enough to have a crowd like this come to a conference like this, it's realizing that a lot of what people are doing and calling agile just isn't. We have to recognize that and fight against it, because some people have said, "Oh, we're going to 'post-agile,' we've got to come up with some new word" - but that doesn't help the fundamental problem. It's the values and principles that count, and we have to address and keep pushing those forwards; we might as well use the same label, but we've got to let people know what it really stands for.

Ask HN: How to organize personal knowledge? [Published articles]

Procrastination Is More About Managing Emotions Than Time, Says Study [Published articles]

An anonymous reader quotes a report from the BBC: [A new study] identified two areas of the brain that determine whether we are more likely to get on with a task or continually put it off. Researchers used a survey and scans of 264 people's brains to measure how proactive they were. Experts say the study, in Psychological Science, underlines that procrastination is more about managing emotions than time. It found that the amygdala -- an almond-shaped structure in the temporal (side) lobe which processes our emotions and controls our motivation -- was larger in procrastinators. In these individuals, there were also poorer connections between the amygdala and a part of the brain called the dorsal anterior cingulate cortex (DACC). The DACC uses information from the amygdala and decides what action the body will take. It helps keep the person on track by blocking out competing emotions and distractions. The researchers suggest that procrastinators are less able to filter out interfering emotions and distractions because the connections between the amygdala and the DACC in their brains are not as good as in proactive individuals.

Open Source Devs Reverse Decision to Block ICE Contractors From Using Software [Published articles]

An anonymous reader quotes Motherboard: Less than 24 hours after a software developer revoked access to Lerna, a popular open-source software management program, for any organization that contracted with U.S. Immigration and Customs Enforcement, access has been restored for any organization that wishes to use it and the developer has been removed from the project... The modified version specifically banned 16 organizations, including Microsoft, Palantir, Amazon, Northeastern University, Johns Hopkins University, Dell, Xerox, LinkedIn, and UPS... Although open-source developer Jamie Kyle acknowledged that it's "part of the deal" that anyone "can use open source for evil," he told me he couldn't stand to see the software he helped develop get used by companies contracting with ICE. Kyle's modification of Lerna's license was originally assented to by other lead developers on the project, but the decision polarized the open-source community. Some applauded his principled stand against ICE's human rights violations, while others condemned his violation of the spirit of open-source software. Eric Raymond, a co-founder of the Open Source Initiative and one of the authors of the standard-bearing Open Source Definition, said Kyle's decision violated the fifth clause of the definition, which prohibits discrimination against people or groups. "Lerna has defected from the open-source community and should be shunned by anyone who values the health of that community," Raymond wrote in a blog post on his website. The core contributor who eventually removed Kyle also apologized for Kyle's licensing change, calling it a "rash decision" (which was also "unenforceable.") Eric Raymond had called the decision "destructive of one of the deep norms that keeps the open source community functional -- keeping politics separated from our work."

After 24 Years Doom 2's Last Secret Has Finally Been Discovered [Published articles]

"Almost 25 years after it was released, Doom 2 has finally given up its last secret..." writes Polygon. An anonymous reader quotes their report: It's secret No. 4 on Map 15 (Industrial Zone). Now, the area in question has been known, seen and accessed by other means (usually a noclip cheat code). Getting to it without a cheat appears to be deliberately impossible, according to Doom co-creator John Romero. Romero tweeted out congratulations to the solution's discoverer, Zero Master. Zero Master figured out that the way to trigger the secret was to be pushed into the secret area by an enemy (in this case, a Pain Elemental). Apparently the secret sector was an area just below the floor of a teleporter -- but entering that teleporter meant players rose up to the level of the teleporter's floor, according to Romero, so "you never enter the sector... you would never get inside the teleporter sector to trigger the secret." One Reddit user notes Zero Master "has the first legit Doom 2 100% save file on earth, after 24 years."

FCC Can Define Markets With Only One ISP as 'Competitive', Court Rules [Published articles]

An appeals court has upheld a Federal Communications Commission ruling that broadband markets can be competitive even when there is only one Internet provider. From a report: The FCC "rationally chose which evidence to believe among conflicting evidence," the court ruling said. The FCC voted last year to eliminate price caps imposed on some business broadband providers such as AT&T and Verizon. The FCC decision eliminated caps in any given county if 50 percent of potential customers "are within a half mile of a location served by a competitive provider." This is known as the "competitive market test." Because of this, broadband-using businesses might not benefit from price controls even if they have just one choice of ISP.
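For illustration only, the "competitive market test" as described reduces to a simple threshold check over distances. The sketch below assumes hypothetical customer and provider coordinates and uses a haversine distance; it is not necessarily the FCC's actual methodology, just a way to make the 50-percent-within-half-a-mile rule concrete.

```typescript
// Hypothetical sketch of the "competitive market test" as described above:
// a county passes if at least 50% of potential customers are within half a
// mile of a location served by a competitive provider. Data shapes and the
// distance formula are assumptions, not the FCC's actual methodology.

interface Point { lat: number; lon: number; }

const HALF_MILE_METERS = 804.67;

// Haversine great-circle distance in meters.
function distanceMeters(a: Point, b: Point): number {
  const R = 6371000;
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function countyIsCompetitive(customers: Point[], competitorSites: Point[]): boolean {
  const nearCompetitor = customers.filter((c) =>
    competitorSites.some((s) => distanceMeters(c, s) <= HALF_MILE_METERS)
  ).length;
  return nearCompetitor / customers.length >= 0.5; // the 50% threshold
}

// Example: a county whose lone customer is roughly 0.3 miles from a competitor site.
console.log(countyIsCompetitive(
  [{ lat: 40.0, lon: -75.0 }],
  [{ lat: 40.004, lon: -75.0 }]
));
```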

Linux Turns 27 [Published articles]

It's been 27 years since Linus Torvalds let a group of people know about his "hobby" OS. OMGUbuntu blog writes: Did you know that Linux, like Queen Elizabeth II, actually has two birthdays? Some FOSS fans consider the first public release of (prototype) code, which dropped on October 5, 1991, as more worthy of being the kernel's true anniversary date. Others, ourselves included, take today, August 25, as the "birth" date of the project. And for good reason. This is the day on which, back in 1991, a young Finnish college student named Linus Torvalds sat at his desk to let the folks on the comp.os.minix newsgroup know about the "hobby" OS he was working on. The "hobby OS" that wouldn't, he cautioned, be anything "big" or "professional." Even as Linux continues to hold the lion's share of the enterprise world, it has only managed to capture a tiny fraction of the consumer space. Further reading: Ask Slashdot: Whatever Happened To the 'Year of Linux on Desktop'? Which Linux-based distro do you use? What changes, if any, would you like to see in it in the next three years?

New Tech Lets Submarines 'Email' Planes [Published articles]

A way for submerged submarines to communicate with planes has been developed by researchers at MIT. From a report: At present, it is difficult for planes to pick up underwater sonar signals because they reflect off the water's surface and rarely break through. The researchers found an extremely high-frequency radar could detect tiny ripples in water, created by an ordinary underwater speaker. This could let lost flight recorders and submarines communicate with planes. Submarines communicate using sonar waves, which travel well underwater but struggle to break through the surface. Planes communicate using radio signals that do not travel well in water. At present, submarines can surface to send messages - but this risks revealing their location. Sometimes, buoys are used to receive sonar signals and translate them into radio signals. "Trying to cross the air-water boundary with wireless signals has been an obstacle," said Fadel Adib, from the MIT Media Lab. The system developed at MIT uses an underwater speaker to aim sonar signals directly at the water's surface, creating tiny ripples only a few micrometres in height. These ripples can be detected by high-frequency radar above the water and decoded back into messages.
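To make the air-water link concrete, here is a minimal, purely illustrative sketch of the underlying idea: encode bits as two low-frequency tones (the kind an underwater speaker could drive as surface ripples) and recover them by comparing the signal energy at the two candidate frequencies in each symbol window. The sample rate, tone frequencies, and symbol length are arbitrary assumptions, and the radar-side processing of the actual MIT system is not modeled here.

```typescript
// Illustrative only: encode bits as two acoustic tones (FSK) and recover them
// from a sampled waveform. The real air-water system described above uses
// millimeter-wave radar to read micrometre-scale ripples; none of that radar
// processing is modeled here.

const SAMPLE_RATE = 8000;       // samples per second (hypothetical)
const SYMBOL_SAMPLES = 400;     // samples per bit, i.e. 50 ms per bit
const FREQ_ZERO = 100;          // Hz used for a 0 bit (hypothetical choice)
const FREQ_ONE = 200;           // Hz used for a 1 bit (hypothetical choice)

// "Transmit": turn a bit string into the waveform the underwater speaker plays.
function modulate(bits: string): number[] {
  const samples: number[] = [];
  for (const bit of bits) {
    const freq = bit === "1" ? FREQ_ONE : FREQ_ZERO;
    for (let n = 0; n < SYMBOL_SAMPLES; n++) {
      samples.push(Math.sin((2 * Math.PI * freq * n) / SAMPLE_RATE));
    }
  }
  return samples;
}

// Energy of one symbol window at a given frequency (a simple correlation).
function energyAt(chunk: number[], freq: number): number {
  let re = 0, im = 0;
  chunk.forEach((s, n) => {
    const phase = (2 * Math.PI * freq * n) / SAMPLE_RATE;
    re += s * Math.cos(phase);
    im += s * Math.sin(phase);
  });
  return re * re + im * im;
}

// "Receive": per symbol window, pick whichever candidate tone is stronger.
function demodulate(samples: number[]): string {
  let bits = "";
  for (let i = 0; i + SYMBOL_SAMPLES <= samples.length; i += SYMBOL_SAMPLES) {
    const chunk = samples.slice(i, i + SYMBOL_SAMPLES);
    bits += energyAt(chunk, FREQ_ONE) > energyAt(chunk, FREQ_ZERO) ? "1" : "0";
  }
  return bits;
}

// Round trip: "101101" should come back unchanged in this noiseless sketch.
console.log(demodulate(modulate("101101")));
```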

Watery exoplanets may be common, but not very friendly [Published articles]

Artist's concept of exoplanets similar to Earth

Watery planets beyond the Solar System may be more common than previously thought, making up 35 percent of exoplanets two to four times the size of the Earth. According to a new study, data from the Kepler Space Telescope and the Gaia mission indicate that many of these planets may be as much as half water by mass, compared with the roughly 0.02 percent of Earth's mass that is water.

Stuff The Internet Says On Scalability For August 17th, 2018 [Published articles]

Hey, it's HighScalability time:

 

The amazing Zoomable Universe from 10^27 meters—about 93 billion light-years—down to the subatomic realm, at 10^-35 meters.

 

Do you like this sort of Stuff? Please lend me your support on Patreon. It would mean a great deal to me. And if you know anyone looking for a simple book that uses lots of pictures and lots of examples to explain the cloud, then please recommend my new book: Explain the Cloud Like I'm 10. They'll love you even more.

 

  • 2.24x10^32T: joules needed by the Death Star to obliterate Alderaan, which would liquefy everyone in the Death Star; 13 of 25: highest paying jobs are in tech; 70,000+: paid Slack workspaces; 13: hours the average American sits; $13.5 million: lost in ATM malware hack; $1.5 billion: cryptocurrency gambling ring busted in China; $8.5B: Auto, IoT, Security startups; 10x: infosec M&A; 1,000: horsepower needed to fly a jet suit; 30%: Google's energy savings from AI control of datacenters;

  • Quotable Quotes:
    • The Jury Is In: From the security point of view, the monolithic OS design is flawed and a root cause of the majority of compromises. It is time for the world to move to an OS structure appropriate for 21st century security requirements.
    • @coryodaniel: Rewrote an #AWS APIGateway & #lambda service that was costing us about $16000 / month in #elixir. Its running in 3 nodes that cost us about $150 / month. 12 million requests / hour with sub-second latency, ~300GB of throughput / day. #myelixirstatus !#Serverless...No it’s not Serverless anymore it’s running in a few containers on a kubernetes cluster
    • @cablelounger: OH: To use AWS offerings, you really need in-house dev-ops expertise vs GCP, they make dev ops transparent to you. (I've a lot of experience with AWS and mostly agree with the first point. I haven't really used GCP in earnest. I'd love to hear experiences from people who have.)
    • @allspaw: engineer: “Unless you’re familiar with Lamport, Brewer, Fox, Armstrong, Stonebraker, Parker, Shapiro...(and others) you don’t know distributed systems.” also engineer: “I read ‘Thinking Fast and Slow’ therefore I know cognitive psychology and decision-making theory.”
    • alankay1: To summarize here, I said I love "Rocky's Boots", and I love the basic idea of "Robot Odyssey", but for end-users, using simple logic gates to program multiple robots in a cooperative strategy game blows up too much complexity for very little utility. A much better way to do this would be to make a "next Logo" that would allow game players to make the AI brains needed by the robots. So what I actually said is that doing it the way you are doing it will wind up with a game that is not successful or very playable. Just why they misunderstood what I said is a bit of a mystery, because I spelled out what could be really good for the game (and way ahead of what other games were doing). And of course it would work on an Apple II and other 8 bit micros (Logo ran nicely on them, etc.)
    • Michael Malone: Nolan was the first guy to look at Moore’s law and say to himself: You know what? When logic and memory chips get to be under ten bucks I can take these big games and shove them into a pinball machine.
    • @hichaelmart: To be honest, I think the main lesson from this is that API Gateway is expensive – 100% agree. We have a GAE app doing a very similar thing, billions of impressions/mth – and *much* cheaper than if it were on API Gateway.
    • Keep on reading for many more quotes hot off the internet. You'll be a better person.
Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

SNES.party lets you play Super Nintendo with your friends [Published articles]

Hot on the heels of the wonderful NES.party comes Haukur Rosinkranz’s SNES.party, a site that lets you play Super Nintendo with all your buds.

Rosinkranz is Icelandic but lives in Berlin now. He made NES.party a year ago while experimenting with WebRTC and WebSockets, and he has now updated his software to support the SNES.

“The reason I made it was simply because I discovered how advanced the RTC implementation in Chrome had become and wanted to do something with it,” he said. “When I discovered that it’s possible to take a video element and stream it over the network I just knew I had to do something cool with this and I came up with the idea of streaming emulators.”
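For context on the technique he is describing, the usual browser pattern is to capture a canvas or video element as a MediaStream and attach its tracks to an RTCPeerConnection. The rough TypeScript sketch below shows that pattern; the element ID and the sendToPeer signaling callback are hypothetical, and this is not SNES.party's actual code.

```typescript
// Rough sketch of streaming an emulator's <canvas> output to a peer over
// WebRTC. Element IDs, the signaling channel, and sendToPeer() are
// hypothetical; SNES.party's real implementation is not shown here.

async function shareEmulatorOutput(sendToPeer: (msg: string) => void) {
  const canvas = document.getElementById("emulator") as HTMLCanvasElement;

  // captureStream() turns whatever is drawn on the canvas into a MediaStream.
  const stream = canvas.captureStream(60); // target 60 fps

  const pc = new RTCPeerConnection();
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Hand ICE candidates and the offer to whatever signaling channel you use
  // (the article mentions WebSockets being used for this kind of coordination).
  pc.onicecandidate = (e) => {
    if (e.candidate) sendToPeer(JSON.stringify({ candidate: e.candidate }));
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer(JSON.stringify({ offer }));

  return pc; // handling the remote answer and candidates is omitted
}
```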

He said it took him six months to build the app and a month to add SNES support.

“It’s hard to say how long it took because I basically created my own framework for web applications that need realtime communication between one or more participants,” he said. He is a freelance programmer.

It’s a clever hack that could add a little fun to your otherwise dismal day. Feel like a little Link to the Past? Pop over here and let’s play!

Topple Track Attacks EFF and Others With Outrageous DMCA Notices [Published articles]

Update August 10, 2018: Google has confirmed that it has removed Topple Track from its Trusted Copyright Removal Program membership due to a pattern of problematic notices.

Symphonic Distribution (which runs Topple Track) contacted EFF to apologize for the improper takedown notices. It blamed “bugs within the system that resulted in many whitelisted domains receiving these notices unintentionally.” Symphonic Distribution said that it had issued retraction notices and that it was working to resolve the issue. While we appreciate the apology, we are skeptical that its system is fixable, at least via whitelisting domains. Given the sheer volume of errors, the problem appears to be with Topple Track’s search algorithm and lack of quality control, not just with which domains it searches.
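As a purely hypothetical illustration of why domain whitelisting is a thin fix (this is not Topple Track's code, and the domains and URLs are made up for the example), a filter like the one below only suppresses notices for domains someone remembered to list; any false match on an unlisted domain still goes out, which is why the matching algorithm itself is the real problem.

```typescript
// Hypothetical sketch of domain whitelisting in an automated takedown pipeline.
// It illustrates why a whitelist alone cannot fix a matcher that produces
// false positives in the first place.

const WHITELIST = new Set(["eff.org", "nyu.edu", "nbcnews.com"]); // example domains

interface Candidate {
  url: string;   // page the matcher flagged
  work: string;  // copyrighted work the matcher thinks is infringed
}

function hostOf(url: string): string {
  return new URL(url).hostname.replace(/^www\./, "");
}

// Drop candidates on whitelisted domains; everything else is still sent,
// including any false matches on domains nobody thought to whitelist.
function filterCandidates(candidates: Candidate[]): Candidate[] {
  return candidates.filter((c) => !WHITELIST.has(hostOf(c.url)));
}

const matches: Candidate[] = [
  { url: "https://www.eff.org/some-case-page", work: "My New Boy" },       // made-up URL
  { url: "https://blog.example.net/dmca-news", work: "My New Boy" },       // made-up URL
];

// Only the first candidate is suppressed; the second, equally spurious match
// would still generate a takedown notice.
console.log(filterCandidates(matches));
```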

At EFF, we often write about abuse of the Digital Millennium Copyright Act (DMCA) takedown process. We even have a Hall of Shame collecting some of the worst offenders. EFF is not usually the target of bad takedown notices, however. A company called Topple Track has been sending a slew of abusive takedown notices, including false claims of infringement levelled at news organizations, law professors, musicians, and yes, EFF.

Topple Track is a “content protection” service owned by Symphonic Distribution. The company boasts that it is “one of the leading Google Trusted Copyright Program members.” It claims:

Once we identify pirated content we send out automated DMCA takedown requests to Google to remove the URLs from their search results and/or the website operators. Links and files are processed and removed as soon as possible because of Topple Track’s relationship with Google and file sharing websites that are most commonly involved in the piracy process.

In practice, Topple Track is a poster child for the failure of automated takedown processes.

Topple Track’s recent DMCA takedown notices target so much speech it is difficult to do justice to the scope of expression it has sought to delist. A sample of recent improper notices can be found here, here, here, and here. Each notice asks Google to delist a collection of URLs. Among others, these notices improperly target:

Other targets include an article about the DMCA in the NYU Law Review, an NBC News article about anti-virus scams, a Variety article about the Drake-Pusha T feud, and the lyrics to ‘Happier’ at Ed Sheeran’s official website. It goes on and on. If you search for Topple Track’s DMCA notices at Lumen, you’ll find many more examples.

The DMCA requires that the sender of a takedown notice affirm, under penalty of perjury, that the sender has a good faith belief that the targeted sites are using the copyrighted material unlawfully. Topple Track’s notices are sent on behalf of a variety of musicians, mostly hip-hop artists and DJs. We can identify no link—let alone a plausible claim of infringement—between the pages mentioned above and the copyrighted works referenced in Topple Track’s takedown notices.

The notice directed at an EFF page alleges infringement of “My New Boy” by an artist going by the name “Luc Sky.” We couldn’t find any information about this work online. Assuming this work exists, it certainly isn’t infringed by an out-of-date case page that has been languishing on our website for more than eight years. Nor is it infringed by Eric Goldman’s blog post (which has more recent news about the EMI v MP3Tunes litigation). 

EMI v. MP3Tunes was a case about a now-defunct online storage service called MP3Tunes. The record label EMI sued the platform for copyright infringement based on the alleged actions of some of its users. But none of this has any bearing on Luc Sky. MP3Tunes has been out of business for years.

It is important to remember that even the most ridiculous takedown notices can have real consequences. Many site owners will never even learn that their URL was targeted. For those that do get notice, very few file counternotices. These users may get copyright strikes and thereby risk broader disruptions to their service. Even if counternotices are filed and processed fairly quickly, material is taken down or delisted in the interim. In Professor Goldman’s case, Google also disabled AdSense on the blog post until his counternotice became effective.

We cannot comprehend how Topple Track came to target EFF or Eric Goldman on behalf of Luc Sky. But given the other notices we reviewed, it does not appear to be an isolated error. Topple Track’s customers should also be asking questions. Presumably they are paying for this defective service.

While Topple Track is a particularly bad example, we have seen many other cases of copyright robots run amok. We reached out to Google to ask if Topple Track remains part of its trusted copyright program but did not hear back. At a minimum, it should be removed from any trusted programs until it can prove that it has fixed its problems.