For the first time, scientists have caught two neutron stars in the act of colliding, revealing that these strange smash-ups are the source of heavy elements such as gold and platinum. From a report: The discovery, announced today at a news conference and in scientific reports written by some 3,500 researchers, solves a long-standing mystery about the origin of these heavy elements -- which are found in everything from wedding rings to cellphones to nuclear weapons. It's also a dramatic demonstration of how astrophysics is being transformed by humanity's newfound ability to detect gravitational waves, ripples in the fabric of space-time that are created when massive objects spin around each other and finally collide. "It's so beautiful. It's so beautiful it makes me want to cry. It's the fulfillment of dozens, hundreds, thousands of people's efforts, but it's also the fulfillment of an idea suddenly becoming real," says Peter Saulson of Syracuse University, who has spent more than three decades working on the detection of gravitational waves. Albert Einstein predicted the existence of these ripples more than a century ago, but scientists didn't manage to detect them until 2015. Until now, they'd made only four such detections, and each time the distortions in space-time were caused by the collision of two black holes. That bizarre phenomenon, however, can't normally be seen by telescopes that look for light. Neutron stars, by contrast, spew out visible cosmic fireworks when they come together. These incredibly dense stars are as small as cities like New York and yet have more mass than our sun. Further reading: 'A New Rosetta Stone for Astronomy' (The Atlantic), and Gravitational Wave Astronomers Hit Mother Lode (Scientific American).
Read more of this story at Slashdot.
Astronomers have detected the collision of a pair of dead stars, called a kilonova, that caused a cosmic ripple of gravitational waves around 130 million years ago. It turns out these kinds of massive explosions forged most of the gold, silver, and other heavy elements in our universe. From Nadia Drake's story in National Geographic:
First theorized by Albert Einstein in 1916, gravitational waves are kinks or distortions in the fabric of spacetime caused by extremely violent cosmic events. Until now, all confirmed detections involved a deadly dance between two black holes, which leave no visible signature on the sky.
But with this latest event, teams using about a hundred instruments at roughly 70 observatories were able to track down and watch the cataclysm in multiple wavelengths of light, allowing astronomers to scrutinize the source of these cosmic ripples for the first time.
“We saw a totally new phenomenon that has never before been seen by humans,” says Andy Howell of the University of California, Santa Barbara. “It’s an amazing thing that may not be duplicated in our lifetimes.”
Unlike colliding black holes, shredded neutron stars expel metallic, radioactive debris that can be seen by telescopes—if you know when and where to look.
“We felt the universe shaking from two neutron stars merging together, and that told us where to go and point our telescopes,” says Howell, whose team was among several that chased down the stars tied to the gravitational wave signal.
Original Release date: 16 Oct 2017 | Last revised: 16 Oct 2017
Wi-Fi Protected Access II (WPA2) handshake traffic can be manipulated to induce nonce and session key reuse, resulting in key reinstallation by a wireless access point (AP) or client. An attacker within range of an affected AP and client may leverage these vulnerabilities to conduct attacks that are dependent on the data confidentiality protocols being used. Attacks may include arbitrary packet decryption and injection, TCP connection hijacking, HTTP content injection, or the replay of unicast and group-addressed frames.
CWE-323: Reusing a Nonce, Key Pair in Encryption
Wi-Fi Protected Access II (WPA2) handshake traffic can be manipulated to induce nonce and session key reuse, resulting in key reinstallation by a victim wireless access point (AP) or client. After establishing a man-in-the-middle position between an AP and client, an attacker can selectively manipulate the timing and transmission of messages in the WPA2 Four-way, Group Key, Fast Basic Service Set (BSS) Transition, PeerKey, Tunneled Direct-Link Setup (TDLS) PeerKey (TPK), or Wireless Network Management (WNM) Sleep Mode handshakes, resulting in out-of-sequence reception or retransmission of messages. Depending on the data confidentiality protocols in use (e.g. TKIP, CCMP, and GCMP) and situational factors, the effect of these manipulations is to reset nonces and replay counters and ultimately to reinstall session keys. Key reuse facilitates arbitrary packet decryption and injection, TCP connection hijacking, HTTP content injection, or the replay of unicast, broadcast, and multicast frames.
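To see why nonce reuse is so damaging, consider a minimal sketch of a stream-style cipher such as CCMP's AES-CTR mode. This is an illustration only, not the WPA2 protocol: the keystream is simulated with SHA-256 over (key, nonce, counter), and the key, nonce, and messages are made up. The point it demonstrates is real, though: encrypting two messages under the same key and nonce produces the same keystream, and XORing the two ciphertexts cancels that keystream entirely.

```python
# Illustrative sketch (NOT the real WPA2/CCMP construction): why reusing
# a nonce with the same session key breaks a stream-style cipher.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Simulated keystream: hash (key, nonce, counter) until enough bytes."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream (decryption is identical)."""
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"session-key"
nonce = b"\x00" * 8  # a reinstalled key resets the nonce to its initial value

p1 = b"GET /login HTTP/1.1"  # plaintext the attacker knows or can guess
p2 = b"secret-password-123"  # plaintext the attacker wants

c1 = encrypt(key, nonce, p1)
c2 = encrypt(key, nonce, p2)  # same key + same nonce = identical keystream

# The attacker never learns the key, but XORing the two ciphertexts
# cancels the shared keystream, leaking p1 XOR p2...
xored = bytes(a ^ b for a, b in zip(c1, c2))
# ...and XORing in the known plaintext p1 recovers p2 outright.
recovered = bytes(x ^ p for x, p in zip(xored, p1))
print(recovered)  # b'secret-password-123'
```

This is exactly the failure mode a key reinstallation induces: the victim is tricked into resetting its transmit nonce and replay counter, so fresh traffic is encrypted under an already-used (key, nonce) pair.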
For a detailed description of these issues, refer to the researcher's website and paper.
An attacker within the wireless communications range of an affected AP and client may leverage these vulnerabilities to conduct attacks that are dependent on the data confidentiality protocol being used. Impacts may include arbitrary packet decryption and injection, TCP connection hijacking, HTTP content injection, or the replay of unicast, broadcast, and multicast frames.
| Vendor | Status | Date Notified | Date Updated |
|---|---|---|---|
| Aruba Networks | Affected | 28 Aug 2017 | 09 Oct 2017 |
| Cisco | Affected | 28 Aug 2017 | 16 Oct 2017 |
| Espressif Systems | Affected | 22 Sep 2017 | 13 Oct 2017 |
| Fortinet, Inc. | Affected | 28 Aug 2017 | 16 Oct 2017 |
| FreeBSD Project | Affected | 28 Aug 2017 | 12 Oct 2017 |
|  | Affected | 28 Aug 2017 | 16 Oct 2017 |
| HostAP | Affected | 30 Aug 2017 | 16 Oct 2017 |
| Intel Corporation | Affected | 28 Aug 2017 | 10 Oct 2017 |
| Juniper Networks | Affected | 28 Aug 2017 | 16 Oct 2017 |
| Microchip Technology | Affected | 28 Aug 2017 | 16 Oct 2017 |
| Microsoft Corporation | Affected | 28 Aug 2017 | 16 Oct 2017 |
| OpenBSD | Affected | 28 Aug 2017 | 16 Oct 2017 |
| Peplink | Affected | 28 Aug 2017 | 16 Oct 2017 |
| Red Hat, Inc. | Affected | 28 Aug 2017 | 04 Oct 2017 |
| Samsung Mobile | Affected | 28 Aug 2017 | 12 Oct 2017 |
Thanks to Mathy Vanhoef of the imec-DistriNet group at KU Leuven for reporting these vulnerabilities. Mathy thanks John A. Van Boxtel for finding that wpa_supplicant v2.6 is also vulnerable to CVE-2017-13077.
The CERT/CC also thanks ICASI for their efforts to facilitate vendor collaboration on addressing these vulnerabilities.
This document was written by Joel Land.
If you have feedback, comments, or additional information about this vulnerability, please send us email.
Russia will issue its own official cryptocurrency, the CryptoRuble, capping months of speculation about the country’s approach to the technology. While in a way it indicates an embrace of the likes of Bitcoin and Ethereum, the CryptoRuble is unlikely to share the truly decentralized nature of other coins.
You may remember that last year, Verizon was punished by the FCC for injecting information into its subscribers’ traffic that allowed them to be tracked without their consent. That practice appears to be alive and well despite being disallowed in a ruling last March: companies appear to be able to request your number, location, and other details from your mobile provider quite easily.
Wireless networks already have to deal with increasingly crowded waves, and that's only going to get worse when 5G rolls around. Any boost to the signal could lead to a big jump in performance, especially when you're using very high frequencies that...
"Weird Al" Yankovic is switching things up for his upcoming North American tour. He's putting away all his props, costumes, and video screens and pulling out "obscure songs you barely remember."
He's calling it the "Ridiculously Self-Indulgent, Ill-Advised Vanity Tour".
Here's what he wrote on Facebook about it:
In case you haven’t heard the rumors… THIS WILL NOT BE OUR NORMAL KIND OF TOUR. I decided we should try something different, just for a change of pace. So next year we’re scaling way, way back. No costumes, no props, no video screens, no computer servers. We’re just going to walk out on stage, sit down on stools, and play a bunch of old songs. Oh, and we’re going to be performing almost exclusively originals (i.e. not parodies). The deep cuts and obscure tracks. The songs that were never hits. The ones you barely remember.
Okay, obviously this tour is not for everybody. By design, it has extremely limited appeal. Instead of doing festivals, fairs and arenas, we’ll be doing small, intimate theatres. Instead of putting on a big flashy production, we’ll be trying to go for something very informal and low-key… kind of an Unplugged/Storytellers vibe. Like we’re just hanging out, playing in your living room. So if you’ve really got your heart set on seeing fat suits and Segways and hearing all your favorite parodies… this probably isn’t the tour for you. Chances are we’ll be doing that kind of show again sometime in the future, just not THIS time.
"How can the police induce citizens to help investigate crime? By trying to make it 'cool' and turning it into a game that awards points for hits," reports CSO. mrwireless writes: Through their 'police of the future' innovation initiative, and inspired by Pokemon Go, the Dutch police are building an app where you can score points by photographing the license plates of stolen cars. When a car is reported stolen the app will notify people in the neighbourhood, and then the game is on! Privacy activists are worried this creates a whole new relationship with the police, as a deputization of citizens blurs boundaries, and institutionalizes 'coveillance' -- citizens spying on citizens. It could be a slippery slope to situations that more resemble the Stasi regime's, which famously used this form of neighborly surveillance as its preferred method of control. CSO cites Spiegel Online's description of the unofficial 189,000 Stasi informants as "totally normal citizens of East Germany who betrayed others: neighbors reporting on neighbors, schoolchildren informing on classmates, university students passing along information on other students, managers spying on employees and Communist bosses denouncing party members." The Dutch police are also building another app that allows citizens to search for missing persons.
An anonymous reader quotes a report from Engadget: 10.9 million U.S. driver's licenses were stolen in the massive breach that Equifax suffered in mid-May, according to a new report by The Wall Street Journal. In addition, WSJ has revealed that the attackers got a hold of 15.2 million UK customers' records, though only 693,665 among them had enough info in the system for the breach to be a real threat to their privacy. Affected customers provided most of the driver's licenses on file to verify their identities when they disputed their credit-report information through an Equifax web page. That page was one of the entry points the attackers used to gain entry into the credit reporting agency's system.
“One of the markers of [the] current age is that we’re starting to talk about who sets the classics,” The Stone Sky author N.K. Jemisin said at NYCC’s recent panel The New Classics of SFF. In response to the opening question from moderator Petra Mayer (of NPR Books)—what makes a classic work of SFF?—Jemisin explained that having conversations about whose stories are central helps to expand what constitutes the canon of science fiction and fantasy works. The notion of a canon was Provenance author Ann Leckie’s contribution, likening it to her study of the classical canon of music in college. But where she received her training from one or two handpicked textbooks, today’s readers have the internet, which allows for so many conversations to be seen simultaneously. Leckie made the argument that there is no longer “a single list of canonical classics, but a bunch of intersecting and interpenetrating lists.”
Here Jemisin respectfully disagreed, pointing out that the “literary commons are not open to everybody just yet” and that there are still divides to be breached in terms of internet access. In fact, she said, “I don’t know how I feel about a canon anymore. … The sheer volume of books that exist out there means that a canon is no longer possible.” Instead, she focused on the notion of classics themselves, defining them as “the books that change your thinking, that blow your mind, that reorder your world.”
That could easily describe both writers’ series: Leckie’s Imperial Radch trilogy, with its thoughtful meditation on gender in a futuristic, space-faring human species, and Jemisin’s Broken Earth trilogy, which masterfully combines epic, apocalyptic fantasy with wrenching emotional stakes. As Hugo Award winners and “two of the most ass-kicking, mind-blowing writers working today” (as Mayer introduced them), they’re perfectly situated to talk about shifting notions of what makes a classic in the genre. Another fascinating angle is that both are active on social media, engaging with readers in ways that only one generation of authors has thus far.
When asked how authors’ social media presence and readers’ ability to “process the personality along with the writing” would affect the perception of classics, Jemisin looked at the attendees and said, “Raise your hands if you still think of Ender’s Game as a classic. My guess is if I had asked that 10 or 15 years ago, the number would be larger.” She went on to say, “Knowing about authors’ beliefs helps you understand how those beliefs influence their writing, and things you thought meant one thing, once you’ve got enough information about that writer, you suddenly realize mean an entirely different thing. That makes a difference. … And that’s not necessarily a bad thing.”
“Nothing means anything without a context,” Leckie added.
“I think the people who believe that works can and always should be divorced from the context are people who have the privilege to do so,” Jemisin said.
Speaking to a different sort of context, Mayer pointed out that SFF is often automatically perceived as a metaphor for contemporary issues in society at the time it is written. “That’s a lot of emotional labor,” she remarked, asking if either author ever wanted to tell people to just read the story. “I can’t speak for other writers,” Leckie responded, “but I don’t sit down and say, ‘Now I’m going to tell a story that critiques our society and culture’; I sit down and say, ‘Now I’m going to tell a story about a sentient spaceship with a thousand bodies.’ … In the end I wind up saying a thing because stories say things. … The nature of science fiction is that it will make a comment about society because we are writing within our certain context.”
To that end, the panel addressed important areas for readers, critics, and authors to unpack; for instance, how having white people be the central race in a story is not a neutral narrative choice, with Mayer asking if the authors believed that things are starting to change.
“It is changing,” Jemisin said, “because the pushback tells us it is changing.” She went on to describe “the people who know full well that whiteness and maleness and straightness have meaning—the people who like that it has meaning—the people who like that its meaning is centrality and, in their mind, superiority, and who like the privileges that come with those things,” and how she has perceived that population’s reactions to “[t]he slow changes that we’re beginning to see in all of the media forms and entertainment forms that exist out there—they know that that shapes how we think about reality. They know full well that we didn’t start thinking that a black president was a thing until we started to see a bunch of them on TV, until we started to imagine them in our media. If you can imagine something, it will be.”
Today EFF and Public Knowledge are releasing a whitepaper titled Which Internet registries offer the best protection for domain owners? Top-level domains are the letters after the dot, like .com, .uk, .biz, or .mobi. Since 2003, hundreds of new top-level domains have come onto the market, and there has never been more choice for domain name registrants. But apart from choosing a name that sounds right and is easy to remember, a domain name registrant should also consider the policies of the registry that operates the domain, and those of the registrar that sells it to them.
To draw one example of out of our whitepaper, if you're running a website to criticize an established brand and you use that brand as part of your domain name, it may be wise to avoid registering it in a top-level domain that offers special rights and procedures to brand owners, that could result in your domain name being wrongly taken away or could embroil you in dispute settlement proceedings.
This probably means you'll want to think twice about registering in any of the newer global top-level domains (gTLDs), which provide brand owners access to a privately-run Trademark Clearinghouse that gives them veto powers that go far beyond those they would receive under the trademark law of the United States or those of most other countries.
For example, under U.S. trademark law, if a trademark applicant sought to register an ordinary word such as smart, forex, hotel, one, love, cloud, nyc, london, abc, or luxury, they would have to specify the category of goods or services they provide, and protection for the mark might only be extended to its use in a logo, rather than as a plain word. Yet each of the plain words above has been registered in the Trademark Clearinghouse, to prevent them being used in any of the new gTLDs without triggering a warning to prospective registrants about possible infringement.
This applies regardless of whether the planned usage covers the same category of goods or services as the original trademark—indeed there isn't even any way for the registrant to find out what that category was, or even which country accepted the mark for registration, because the contents of the Trademark Clearinghouse database are secret. And since 94% of prospective registrants abandon their attempted registration of a domain after receiving a trademark warning, this has a drastic chilling effect on speech.
EFF is currently participating in an ICANN working group fighting to ensure that brand owners' veto rights aren't extended even further (for example to catch domains that include typos of brand names), and to prevent these outrageous rules being applied to older gTLDs such as .com, .net, and .org. But for now, you can minimize your exposure to trademark bullying by avoiding registering your website in one of the new domains that is subject to these unfair policies. Our whitepaper explains what to look for.
The same considerations apply if you're setting up a website that could fall subject to bullying from copyright holders. In this category, we draw attention to the policies of registries Donuts and Radix that have established private deals with the Motion Picture Association of America (MPAA) appointing it as a "trusted notifier" to initiate a registry-level take down of websites that it claims are engaged in extensive copyright infringement.
Our whitepaper illustrates why remedies for copyright infringement on the Internet should not come from the domain name system, and in particular should not be wielded by commercial actors in an unaccountable process. Organizations such as the MPAA are not known for advancing a balanced approach to copyright enforcement.
To avoid having your website taken down by your domain registry in response to a copyright complaint, our whitepaper sets out a number of options, including registering in a domain whose registry requires a court order before it will take down a domain, or at the very least one that doesn't have a special arrangement with the MPAA or another special interest for the streamlined takedown of domains. For example, it was recently reported that the registry for Costa Rica's .cr domain has been resisting extralegal demands from the U.S. Embassy to delete the domain "ThePirateBay.cr" without a court order.
Copyright and trademark disputes aren't the only grounds on which domain name registries can be asked to suspend or cancel your domain name. They are also frequently asked to do this because the website associated with the domain is hosting content or selling products that are unlawful or against their acceptable use policies. That's why it's important to know what those policies are, how and by whom a breach of those policies is decided, and what national law or laws are taken into consideration. An appendix to our whitepaper breaks this down.
EFF's default position, drawn from the Manila Principles on Intermediary Liability, is that the only way that a registry should be forced to take down a domain because of illegal content on a website is if that determination is made by a court. And if the takedown is for a terms of service violation rather than for a violation of law, the registrant ought to be entitled to due process, including in most cases a right to be heard before any action is taken.
Online pharmacies are an example of a type of website that attracts a lot of pressure upon registries to remove domains without a court order. (LegitScript, a contractor to major U.S. drug companies, regularly boasts about the thousands of websites it has caused to be suspended through its shadowy partnerships with domain registries and registrars.) In cases of the worst of these websites, those that openly sell drugs such as opioids without prescription, their readiness to proactively enforce their acceptable use policies is understandable.
Unfortunately, however, just as it is a mistake to partner with the MPAA over copyright enforcement, it is a mistake to partner with Big Pharma in enforcing pharmaceutical licensing regulations. This results in overreaching enforcement that blocks even legitimate, locally-regulated online pharmacies throughout the world, based principally on the laws of just one country (the USA), which prohibit overseas online pharmacies from selling to U.S. citizens. (Access to medicines activists have proposed a more nuanced set of principles on medicine sales online.)
Extending this example, we would never accept Internet registries being pressured to apply Russia's anti-LGBT laws, nor the Turkish or Thai laws against criticism of those countries' leaders, to take domains down globally. And there is a whole host of such laws that might apply to a domain that a registrant might innocently register, in full compliance with the laws of their own country. Our whitepaper explains how they can minimize the risk of their domain being taken down globally because it may infringe some other country's national law.
Finally, our whitepaper explains how some registries and registrars do a better job at protecting the privacy of domain name registrants than others. For example, there are country-code domains that don't provide public access to registrants' information at all, and some registrars that offer registrants a free privacy proxy registration service. For those that don't offer such a service for free, such proxy registration services are also commercially available to increase the privacy of your registration in any top-level domain.
No matter whether your priority is to protect your domain against trademark or copyright bullies or overseas speech regulators, or to protect the privacy of your personal information, our whitepaper also outlines an often-overlooked option: to host your website as a Tor hidden service. A Tor hidden service is a website with a special pseudo-domain .onion, which makes it much more resilient to censorship than an ordinary website, and if the website operator chooses, also more anonymous. The downside is that it can only be accessed by users of the Tor Browser, so it may not be the best choice for a domain that is meant to be accessible to a large audience.
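As a concrete illustration of how little setup this requires, a Tor onion service for an existing local web server can be declared with two lines in the Tor configuration file (torrc). The directory path and the backend port here are placeholders; Tor generates the .onion address and keys in the named directory on startup.

```
# Minimal torrc fragment: publish a local web server (assumed to be
# listening on 127.0.0.1:8080) as an onion service on virtual port 80.
HiddenServiceDir /var/lib/tor/my_onion_service/
HiddenServicePort 80 127.0.0.1:8080
```

After restarting Tor, the generated address appears in the `hostname` file inside the service directory; there is no registry, registrar, or public WHOIS record involved.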
The domain names we use to connect to websites and Internet services are one of the weak links for free speech online: a potential point of control for governments and businesses to regulate others' online speech and activity. Choosing top-level domains carefully is one step you can take to protect your rights.
There was a time, not that long ago, when if you told people you were a science fiction fan they would ask you—no doubt thinking of The X-Files—whether you really believed in aliens. My usual response was to reply, putting a gentle emphasis on the second word, that it’s called science fiction for a reason. But the fact is that I did, and do, believe in aliens … but not in that way.
Of course I do believe that there are intelligent alien species out there in the universe somewhere (though the Fermi Paradox is troubling, and the more I learn about the peculiar twists and turns that the evolution of life on this planet has taken to get to this point the more I wonder if we might, indeed, be alone in the universe), but I don’t believe that they have visited Earth, at least not in noticeable numbers or in recent history. But I do believe in aliens as people—as complex beings with knowable, if not immediately comprehensible, motives, who can be as good and bad as we can, and not just monsters who want to eat us or steal our water or our breeding stock. And I can date this belief to a specific book.
I was twelve or thirteen when my older cousin Bill came from California to live with us for a summer. At one point during his stay he had a box of old paperbacks to get rid of, and he offered me my choice before taking them to the used book store. One of the books I snagged that day was Hospital Station by James White. It was the cover that grabbed me, I think: a realistic painting of a space hospital—a clear ripoff of Discovery from 2001, but adorned with red crosses. The concept of a hospital in space promised drama, excitement, and tension, and the book did not disappoint. But better than that, it changed my mind and my life in some important ways.
Up until that time I had generally encountered aliens only as villains, or even monsters—the Metaluna Mutants from This Island Earth, the hideous creatures from Invasion of the Saucer-Men, the Martians from War of the Worlds, The Blob. True, there was Spock, but he scarcely seemed alien, and besides there was only one of him. Even in prose fiction (I had recently read Ringworld) the aliens were more nuanced, but still fundamentally adversarial to humanity; alien species tended to appear as stand-ins for either thematic concepts or for other nations or races of humans. But in Hospital Station, for the first time, I found aliens who were truly alien—strange and very different—but nonetheless allies, co-workers, and friends.
Hospital Station is a collection of five stories showing the construction and evolution of the eponymous station—Sector Twelve General Hospital—in a universe with so many intelligent species that a standard four-letter code has been developed to quickly categorize their physiology, behavior, and environmental needs. To accommodate those widely varying environmental needs, the station is divided into many sections, each with atmosphere, gravity, and temperature suitable for its usual occupants. A universal translator ameliorates the problems of communication between species, but—and this is critical—it is not perfect, nor can it immediately comprehend the languages of new aliens; it must be brought up to speed when a new species is encountered. Also, eliminating the language problem doesn’t prevent miscommunications and cultural conflicts.
But despite the conflicts that do exist between species in this universe, the primary problems that face the characters in Hospital Station are those that face any doctors in any hospital on Earth: healing the sick, solving medical mysteries, and preventing the spread of disease. The conflicts are interpersonal, the villains are diseases or physical processes, and the tension is generally provided by a race to heal or cure in time rather than a need to destroy or prevent destruction. It’s not that there is no war in this universe, but the army—the interspecies Monitor Corps—is barely seen in this volume and exists primarily to prevent war rather than to wage it. It is a fundamentally optimistic universe in which the main characters, of widely diverse species with different needs, personalities, and priorities, are primarily cooperating to solve problems rather than competing against one another.
This was the first time I had encountered this type of aliens and I devoured the book with gusto. Even better, I discovered it was the first in a series, which continued until 1999. I soon learned that many other such fictional universes existed—including, to some extent, later incarnations of Star Trek—and eventually I began writing about them myself. The Martians and Venusians in my Arabella Ashby books are intended to be people who, though their bodies, language, and culture may be different from ours, are worth getting to know.
The stories in Hospital Station were written between 1957 and 1960, and they may seem rather quaint by today’s standards (the portrayal of women is particularly eyeroll-worthy). But the book introduced me to a concept we now summarize as “diversity”—the importance of representing and accommodating different kinds of people, with different points of view, who can by their very differences improve everyone’s lives by bringing their unique perspectives to bear on our common problems. Unlike the purely villainous aliens of Invasion of the Body Snatchers or The Thing, these aliens are complex beings, and even when we disagree we can work together to find common cause. And though this view of diversity can sometimes seem facile and overly optimistic, I think it’s better to hope for the best than to live in fear of the worst.
The Sector General novels—of which Hospital Station is the first—are available in omnibus editions from Tor Books.
David D. Levine is the author of the novel Arabella of Mars, its sequel Arabella and the Battle of Venus, and over fifty science fiction and fantasy stories. His story “Tk’Tk’Tk” won the Hugo Award, and he has been shortlisted for awards including the Hugo, Nebula, Campbell, and Sturgeon. His stories have appeared in Asimov’s, Analog, F&SF, Tor.com, numerous Year’s Best anthologies, and his award-winning collection Space Magic.
<martin> "PHP is a minor evil perpetrated and created by incompetent amateurs, whereas Perl is a great and insidious evil, perpetrated by skilled but perverted professionals."
<mking> So what does that make Java?
<zstevens> a DSL for converting XML to stack traces
Sonic Drive-In, a fast-food chain with nearly 3,600 locations across 45 U.S. states, has acknowledged a breach affecting an unknown number of store payment systems. The ongoing breach may have led to a fire sale on millions of stolen credit and debit card accounts that are now being peddled in shadowy underground cybercrime stores, KrebsOnSecurity has learned.
The first hints of a breach at Oklahoma City-based Sonic came last week when I began hearing from sources at multiple financial institutions who noticed a recent pattern of fraudulent transactions on cards that had all previously been used at Sonic.
I directed several of these banking industry sources to have a look at a brand new batch of some five million credit and debit card accounts that were first put up for sale on Sept. 18 in a credit card theft bazaar previously featured here called Joker’s Stash:
This batch of some five million cards put up for sale today (Sept. 26, 2017) on the popular carding site Joker’s Stash has been tied to a breach at Sonic Drive-In. The first batch of these cards appear to have been uploaded for sale on Sept. 15.
Sure enough, two sources who agreed to purchase a handful of cards from that batch of accounts on sale at Joker’s discovered they all had been recently used at Sonic locations.
Armed with this information, I phoned Sonic, which responded within an hour that it was indeed investigating “a potential incident” at some Sonic locations.
“Our credit card processor informed us last week of unusual activity regarding credit cards used at SONIC,” reads a statement the company issued to KrebsOnSecurity. “The security of our guests’ information is very important to SONIC. We are working to understand the nature and scope of this issue, as we know how important this is to our guests. We immediately engaged third-party forensic experts and law enforcement when we heard from our processor. While law enforcement limits the information we can share, we will communicate additional information as we are able.”
Christi Woodworth, vice president of public relations at Sonic, said the investigation is still in its early stages, and the company does not yet know how many or which of its stores may be impacted.
The accounts apparently stolen from Sonic are part of a batch of cards that Joker’s Stash is calling “Firetigerrr,” and they are indexed by city, state and ZIP code. This geographic specificity allows potential buyers to purchase only cards that were stolen from Sonic customers who live near them, thus avoiding a common anti-fraud defense in which a financial institution might block out-of-state transactions from a known compromised card.
Malicious hackers typically steal credit card data from organizations that accept cards by hacking into point-of-sale systems remotely and seeding those systems with malicious software that can copy account data stored on a card’s magnetic stripe. Thieves can use that data to clone the cards and then use the counterfeits to buy high-priced merchandise from electronics stores and big box retailers.
Prices for the cards advertised in the Firetigerr batch are somewhat higher than for cards stolen in other breaches, likely because this batch is extremely fresh and unlikely to have been canceled by card-issuing banks yet.
Dumps available for sale on Joker’s Stash from the “FireTigerrr” base, which has been linked to a breach at Sonic Drive-In.
Most of the cards range in price from $25 to $50, and the price is influenced by a number of factors, including: the type of card issued (Amex, Visa, MasterCard, etc); the card’s level (classic, standard, signature, platinum, etc.); whether the card is debit or credit; and the issuing bank.
I should note that it remains unclear whether Sonic is the only company whose customers’ cards are being sold in this particular batch of five million cards at Joker’s Stash. There are some (as yet unconfirmed) indications that perhaps Sonic customer cards are being mixed in with those stolen from other eatery brands that may be compromised by the same attackers.
The last known major card breach involving a large nationwide fast-food chain impacted more than a thousand Wendy’s locations and persisted for almost nine months after it was first disclosed here. The Wendy’s breach was extremely costly for card-issuing banks and credit unions, which were forced to continuously re-issue customer cards that kept getting re-compromised every time their customers went back to eat at another Wendy’s.
Part of the reason Wendy’s corporate offices had trouble getting a handle on the situation was that most of the breached locations were not corporate-owned but instead independently-owned franchises whose payment card systems were managed by third-party point-of-sale vendors.
According to Sonic’s Wikipedia page, roughly 90 percent of Sonic locations across America are franchised.
Dan Berger, president and CEO of the National Association of Federally Insured Credit Unions, said he’s not looking forward to the prospect of another Wendy’s-like fiasco.
“It’s going to be the financial institution that makes them whole, that pays off the charges or replaces money in the customer’s checking account, or reissues the cards, and all those costs fall back on the financial institutions,” Berger said. “These big card breaches are going to continue until there’s a national standard that holds retailers and merchants accountable.”
Financial institutions also bear some of the blame for the current state of affairs. The United States is embarrassingly the last of the G20 nations to make the shift to more secure chip-based cards, which are far more expensive and difficult for criminals to counterfeit. But many financial institutions still haven’t gotten around to replacing traditional magnetic stripe cards with chip-based cards. According to Visa, 58 percent of the more than 421 million Visa cards issued by U.S. financial institutions were chip-based as of March 2017.
Likewise, retailers that accept chip cards may present a less attractive target to hackers than those that don’t. In March 2017, Visa said the number of chip-enabled merchant locations in the country reached two million, representing 44 percent of stores that accept Visa.
Louise Matsakis, writing for Motherboard: Cloudflare, a major internet security firm, is on a mission to render distributed denial-of-service (DDoS) attacks useless. The company announced Monday that every customer -- including those who only use its free services -- will receive a new feature called Unmetered Mitigation, which protects against every DDoS attack, regardless of its size. Cloudflare believes the move is set to level the internet security playing field: Now every website will be able to fight back against DDoS attacks for free. "The standard practice in the industry for some time has been to charge more if you come under attack," Matthew Prince, the CEO of Cloudflare, told me on a phone call last week. Firms often "fire you as a customer if you're not sort of paying enough and you get a large attack," he explained. "That's kind of gross."
Every few months I see a post on this subreddit about diet, health, or, unfortunately, a coworker passing. I wanted to try to at least bring this up into the collective awareness, as it's something I've sacrificed in the past and am struggling to get back to a healthy amount of. The article is a bit lengthy but the gist is unless you're sleeping that 7-9 hours (some folks may need even more) you could be shortening your life span.
Do you have an end-of-day routine? Read a book? How about no screens after xPM? Anyone subscribe to the short afternoon naps (without anyone giving you endless grief at the office)?
In a Barnes & Noble company meeting CEO Len Riggio told shareholders that Nook was out of the technology business. According to Publishers Weekly, “Riggio explained that when e-book sales began exploding several years ago, B&N felt it had no choice but to enter the digital market. In retrospect, Riggio said, B&N didn’t have the culture or financing to compete…
The other day, my Facebook friend Kevin McKeever called my attention to a news story (warning: autoplaying video) on the Toys R Us affair. It seems that at about the same time as I posted my previous article speculating on Toys R Us’s impending bankruptcy, Toys R Us went ahead and filed for it. The chain will be keeping its stores open as it reorganizes and tries to get rid of some of its load of debt.
But the interesting thing is that the story listed almost two dozen other retailers that were closing or had closed some or many of their outlets. The list included Abercrombie & Fitch, Aerosoles, American Apparel, BCBG, Bebe, The Children’s Place, CVS, Guess, Gymboree, HH Gregg, JC Penney, The Limited, Macy’s, Michael Kors, Payless, Radio Shack, Rue21, Sears/Kmart, and Wet Seal. How many of those do you think can blame Amazon in whole or part for the straits in which they find themselves?
Barnes & Noble may not be too far behind, judging from Nate Hoffelder’s latest piece on chairman Len Riggio’s virtuoso fiddling while the chain burns down around him. Meanwhile, Mike Shatzkin has noticed that one big box chain is unexpectedly finding ways to fight back and stay relevant in the Amazon era. Best Buy—which I discussed last year as possibly providing a model for a B&N comeback—has leveraged its vast number of retail outlet stores into effective competition with Amazon’s speedy shipping. When an online order comes in, it ships the item from the nearest store that has it available in inventory. It’s undeniably a clever idea, not least because it makes in-store inventory accessible to online orders even when the warehouses are out of stock.
Best Buy is also able to partner with competitors to Amazon in device manufacturing—Apple and Microsoft—and offer them retail space within its stores from which their representatives can assist people in person. But Amazon can partner with physical retailers, too—witness the recent announcement that Kohl’s outlets around Los Angeles and Chicago will accept, package, and ship back Amazon returns. And that’s quite apart from the way Amazon recently bought Whole Foods, and announced that Amazon Prime would also be Whole Foods’s new customer loyalty program.
It seems that physical retail and online stores like Amazon are on something of a collision course now. The next few years may well be interesting times in the Chinese sense. And to think that all this came about because of a company started out of Jeff Bezos’s parents’ garage to sell books over the Internet. You never can tell just what acorn is going to grow into an immense oak that threatens to cut off everyone else’s sunlight.
Bloomberg published a story this week citing three unnamed sources who told the publication that Equifax experienced a breach earlier this year which predated the intrusion that the big-three credit bureau announced on Sept. 7. To be clear, this earlier breach at Equifax is not a new finding and has been a matter of public record for months. Furthermore, it was first reported on this Web site in May 2017.
In my initial Sept. 7 story about the Equifax breach affecting more than 140 million Americans, I noted that this was hardly the first time Equifax or another major credit bureau has experienced a breach impacting a significant number of Americans.
On May 17, KrebsOnSecurity reported that fraudsters exploited lax security at Equifax’s TALX payroll division, which provides online payroll, HR and tax services.
That story was about how Equifax’s TALX division let customers who use the firm’s payroll management services authenticate to the service with little more than a 4-digit personal identification number (PIN).
Identity thieves who specialize in perpetrating tax refund fraud figured out that they could reset the PINs of payroll managers at various companies just by answering some multiple-guess questions — known as “knowledge-based authentication” or KBA questions — such as previous addresses and dates that past home or car loans were granted.
On Sept. 18, Bloomberg ran a piece with reporting from no fewer than five journalists there who relied on information provided by three anonymous sources. Those sources reportedly spoke in broad terms about an earlier breach at Equifax, and told the publication that these two incidents were thought to have been perpetrated by the same group of hackers.
The Bloomberg story did not name TALX. Only post-publication did Bloomberg reporters update the piece to include a statement from Equifax saying the breach was unrelated to the hack announced on Sept. 7, and that it had to do with a security incident involving a payroll-related service during the 2016 tax year.
I have thus far seen zero evidence that these two incidents are related. Equifax has said the unauthorized access to customers’ employee tax records (we’ll call this “the March breach” from here on) happened between April 17, 2016 and March 29, 2017.
The criminals responsible for unauthorized activity in the March breach were participating in an insidious but common form of cybercrime known as tax refund fraud, which involves filing phony tax refund requests with the IRS and state tax authorities using the personal information from identity theft victims.
My original report on the March breach was based on public breach disclosures that Equifax was required by law to file with several state attorneys general.
Because the TALX incident exposed the tax and payroll records of its customers’ employees, the victim customers were in turn required to notify their employees as well. That story referenced public breach disclosures from five companies that used TALX, including defense contractor giant Northrop Grumman; staffing firm Allegis Group; Saint-Gobain Corp.; Erickson Living; and the University of Louisville.
One more thing before I move on to the analysis. For more information on why KBA is a woefully ineffective method of stopping fraudsters, see this story from 2013 about how some of the biggest vendors of these KBA questions were all hacked by criminals running an identity theft service online.
Or, check out these stories about how tax refund fraudsters used weak KBA questions to steal personal data on hundreds of thousands of taxpayers directly from the Internal Revenue Service‘s own Web site. It’s probably worth mentioning that Equifax provided those KBA questions as well.
Over the past two weeks, KrebsOnSecurity has received an unusually large number of inquiries from reporters at major publications who were seeking background interviews so that they could get up to speed on Equifax’s spotty security history (sadly, Bloomberg was not among them).
These informational interviews — in which I agree to provide context and am asked to speak mainly on background — are not unusual; I sometimes field two or three of these requests a month, and very often more when time permits. And for the most part I am always happy to help fellow journalists make sure they get the facts straight before publishing them.
But I do find it slightly disturbing that there appear to be so many reporters on the tech and security beats who apparently lack basic knowledge about what these companies do and their roles in perpetuating — not fighting — identity theft.
It seems to me that some of the world’s most influential publications have for too long given Equifax and the rest of the credit reporting industry a free pass — perhaps because of the complexities involved in succinctly explaining the issues to consumers. Indeed, I would argue the mainstream media has largely failed to hold these companies’ feet to the fire over a pattern of lax security and a complete disregard for securing the very sensitive consumer data that drives their core businesses.
To be sure, Equifax has dug themselves into a giant public relations hole, and they just keep right on digging. On Sept. 8, I published a story equating Equifax’s breach response to a dumpster fire, noting that it could hardly have been more haphazard and ill-conceived.
But I couldn’t have been more wrong. Since then, Equifax’s response to this incident has been even more astonishingly poor.
On Tuesday, the official Equifax account on Twitter replied to a tweet requesting the Web address of the site that the company set up to give away its free one-year of credit monitoring service. That site is https://www.equifaxsecurity2017.com, but the company’s Twitter account told users to instead visit securityequifax2017[dot]com, which is currently blocked by multiple browsers as a phishing site.
Under intense public pressure from federal lawmakers and regulators, Equifax said that for 30 days it would waive the fee it charges for placing a security freeze on one’s credit file (for more on what a security freeze entails and why you and your family should be freezing your files, please see The Equifax Breach: What You Should Know).
Unfortunately, the free freeze offer from Equifax doesn’t mean much if consumers can’t actually request one via the company’s freeze page; I have lost count of how many comments have been left here by readers over the past week complaining of being unable to load the site, let alone successfully obtain a freeze. Instead, consumers have been told to submit the requests and freeze fees in writing and to include copies of identity documents to validate the requests.
Sen. Elizabeth Warren (D-Mass) recently introduced a measure that would force the bureaus to eliminate the freeze fees and to streamline the entire process. To my mind, that bill could not get passed soon enough.
Understand that each credit bureau has a legal right to charge up to $20 in some states to freeze a credit file, and in many states they are allowed to charge additional fees if consumers later wish to lift or temporarily thaw a freeze. This is especially rich given that credit bureaus earn roughly $1 every time a potential creditor (or identity thief) inquires about your creditworthiness, according to Avivah Litan, a fraud analyst with Gartner Inc.
In light of this, it’s difficult to view these freeze fees as anything other than a bid to discourage consumers from filing them.
The Web sites where consumers can go to file freezes at the other major bureaus — including TransUnion and Experian — have hardly fared any better since Equifax announced the breach on Sept. 7. Currently, if you attempt to freeze your credit file at TransUnion, the company’s site is relentless in trying to steer you away from a freeze and toward the company’s free “credit lock” service.
That service, called TrueIdentity, claims to allow consumers to lock or unlock their credit files for free as often as they like with the touch of a button. But readers who take the bait probably won’t notice or read the terms of service for TrueIdentity, which has the consumer agree to a class action waiver, a mandatory arbitration clause, and something called ‘targeted marketing’ from TransUnion and their myriad partners.
The agreement also states TransUnion may share the data with other companies:
“If you indicated to us when you registered, placed an order or updated your account that you were interested in receiving information about products and services provided by TransUnion Interactive and its marketing partners, or if you opted for the free membership option, your name and email address may be shared with a third party in order to present these offers to you. These entities are only allowed to use shared information for the intended purpose only and will be monitored in accordance with our security and confidentiality policies. In the event you indicate that you want to receive offers from TransUnion Interactive and its marketing partners, your information may be used to serve relevant ads to you when you visit the site and to send you targeted offers. For the avoidance of doubt, you understand that in order to receive the free membership, you must agree to receive targeted offers.”
TransUnion then encourages consumers who are persuaded to use the “free” service to subscribe to “premium” services for a monthly fee with a perpetual auto-renewal.
In short, TransUnion’s credit lock service (and a similarly named service from Experian) doesn’t prevent potential creditors from accessing your files, and these dubious services allow the credit bureaus to keep selling your credit history to lenders (or identity thieves) as they see fit.
As I wrote in a Sept. 11 Q&A about the Equifax breach, I take strong exception to the credit bureaus’ increasing use of the term “credit lock” to divert people away from freezes. Their motives for saddling consumers with even more confusing terminology are suspect, and I would not count on a credit lock to take the place of a credit freeze, regardless of what these companies claim (consider the source).
Experian’s freeze Web site has performed little better since Sept. 7. Several readers pinged KrebsOnSecurity via email and Twitter to complain that while Experian’s freeze site repeatedly returned error messages stating that the freeze did not go through, these readers’ credit cards were nonetheless charged $15 freeze fees multiple times.
If the above facts are not enough to make your blood boil, consider that Equifax and other bureaus have been lobbying lawmakers in Congress to pass legislation that would dramatically limit the ability of consumers to sue credit bureaus for sloppy security, and cap damages in related class action lawsuits at $500,000.
If ever there was an industry that deserved obsolescence or at least more regulation, it is the credit bureaus. If either of those outcomes is to become reality, it is going to take much more attentive and relentless coverage on the part of the world’s top news publications. That’s because there’s a lot at stake here for an industry that lobbies heavily (and successfully) against any new laws that may restrict their businesses.
Here’s hoping the media can get up to speed quickly on this vitally important topic, and help lead the debate over legal and regulatory changes that are sorely needed.
When I build a new computer one of the things I do as part of the setup is calibrate the color of the monitors. It’s actually pretty amazing how much better things look after just a few minutes of adjustments. It’s also nice to have the monitors synchronized, so if I move a window between […]
Building hardware is fun but tough. We worked on Pebble for a full four years before we launched on Kickstarter in 2012. We went on to sell over $230 million worth of Pebbles, or just over 2 million watches. While it wasn’t our top goal to sell to Fitbit last year, I’m grateful that they’re continuing to work on low-power, fun, hackable smartwatches. Startups in general are…
New submitter Frobnicator writes: Four years ago, the W3C began standardizing Encrypted Media Extensions, or EME. Several organizations, including the EFF, have argued against DRM within web browsers. Earlier this year, after the W3C leadership officially recommended EME despite failing to reach consensus, the EFF filed the first-ever official appeal that the decision be formally polled for consensus. That appeal has been denied, and for the first time the W3C is endorsing a standard against the consensus of its members. In response, the EFF published their resignation from the body: "The W3C is a body that ostensibly operates on consensus. Nevertheless, as the coalition in support of a DRM compromise grew and grew -- and the large corporate members continued to reject any meaningful compromise -- the W3C leadership persisted in treating EME as a topic that could be decided by one side of the debate. [...] Today, the W3C bequeaths a legally unauditable attack-surface to browsers used by billions of people. Effective today, EFF is resigning from the W3C." Jeff Jaffe, CEO of W3C said: "I know from my conversations that many people are not satisfied with the result. EME proponents wanted a faster decision with less drama. EME critics want a protective covenant. And there is reason to respect those who want a better result. But my personal reflection is that we took the appropriate time to have a respectful debate about a complex set of issues and provide a result that will improve the web for its users. My main hope, though, is that whatever point-of-view people have on the EME covenant issue, that they recognize the value of the W3C community and process in arriving at a decision for an inherently contentious issue. We are in our best light when we are facilitating the debate on important issues that face the web."
The Verge reports that Google has added local library ebook listings to its standard search interface when searching on books.
It works, too. When I search on my phone for a book I know my local public library carries in ebook format (that being Grumpy Old Rock Star, a frequently hilarious book of anecdotes by Yes keyboard player Rick Wakeman), it shows right up—along with about two screens’ worth of links to various stores that carry the book in different formats, an option to “follow” the book so that stories about it appear in my swipe-left-from-homescreen Google Feed, and a link to its listing on Google Books.
The results are also there on the desktop, though organized a little differently—instead of being in line with the results, the links are in a small sidebar at the right. It’s the sort of thing you might not immediately notice because your eye skips over it, assuming it to be an ad of some kind. (Which, in part, it is.)
When I click the link, it takes me to the book’s page on my local library’s Overdrive subdomain, with the option to borrow (or place a hold if all the library’s copies are already checked out).
In any case, it’s nice that Google’s directing people to local libraries along with all the local ebook stores. It’s good to remind people that a free alternative to ebook stores exists, even if they could have done a bit more to make that option stand out than just dropping it in as a fairly inconspicuous text link.
Every time you wash your fleece jacket or other synthetic clothing, microscopic synthetic fibres are released and end up in our food supply and drinking water. From a report: These microfibres are so small -- visible only under a microscope -- that they bypass municipal filtration systems and are consumed by fish and other marine life. A team of women from Waterloo, Ontario is looking to solve that problem. They've designed something that looks a lot like a dryer sheet for your laundry machine. You'd be able to drop this reusable sheet, called PolyGone, into the laundry machine with your dirty clothes. It attracts and traps the microfibres so they can be recycled. They presented their work at the annual AquaHacking conference at the University of Waterloo on Wednesday. "With these fibres entering our food system and ending up on our plates, we are essentially eating polluted laundry," said co-founder Lauren Smith at the conference. The event saw five teams, including hers, compete for tens of thousands of dollars and entry into several local incubators and accelerator centres. Smith has a Masters degree in sustainability management from UW, specializing in water.
NASA's Cassini probe has bid farewell to Titan and is now on its way to a fatal encounter with Saturn. At 12:04 pm PDT (3:04 pm EDT), the unmanned orbiter flew by Saturn's largest moon at an altitude of 73,974 mi (119,049 km), altering Cassini's trajectory so it will plunge into Saturn's atmosphere on September 15, marking the dramatic end to the spacecraft's 20-year mission...
An anonymous reader shares a report: One of the main reasons RSS is so beloved of news gatherers is that it catches everything a site publishes -- not just the articles that have proved popular with other users, not just the articles from today, not just the articles that happened to be tweeted out while you were actually staring at Twitter. Everything. In our age of information overload that might seem like a bad idea, but RSS also cuts out everything you don't want to hear about. You're in full control of what's in your feed and what isn't, so you don't get friends and colleagues throwing links into your feeds that you've got no interest in reading. Perhaps most importantly, you don't need to be constantly online and constantly refreshing your feeds to make sure you don't miss anything. It's like putting a recording schedule in place for the shows you know you definitely want to catch rather than flicking through the channels hoping you land on something interesting. There's no rush with RSS -- you don't miss out on a day's worth of news, or TV recaps, or game reviews if you're offline for 24 hours. It's all waiting for you when you get back. And if you're on holiday and the unread article count starts to get scarily high, just hit the mark all as read button and you're back to a clean slate.
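The "catches everything" property the report describes falls directly out of the format: an RSS 2.0 feed is plain XML in which every published item is enumerated, with no ranking or filtering. A minimal sketch using only Python's standard library (the feed content here is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A tiny invented RSS 2.0 document, standing in for a real feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Post one</title><link>https://example.com/1</link></item>
    <item><title>Post two</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def feed_items(xml_text):
    """Return (title, link) for every <item> in an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in feed_items(SAMPLE_FEED):
    print(title, link)
```

A reader simply remembers which item links it has already shown you; "mark all as read" is nothing more than recording the whole current list as seen.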
In light of the Equifax breach that exposed personal information of over 143 million US citizens, a handful of senators have reintroduced legislation that would put more power in the hands of consumers when it comes to their credit reports. Senators...
The blame for the record-breaking cybersecurity breach that affects at least 143 million people falls on the open-source server framework, Apache Struts, according to an unsubstantiated report by equity research firm Baird. The firm's source, per one report, is believed to be Equifax. ZDNet reports: Apache Struts is a popular open-source software programming Model-View-Controller (MVC) framework for Java. It is not, as some headlines have had it, a vendor software program. It's also not proven that Struts was the source of the hole the hackers drove through. In fact, several headlines -- some of which have since been retracted -- all source a single quote by a non-technical analyst from an Equifax source. Not only is that troubling journalistically, it's problematic from a technical point of view. In case you haven't noticed, Equifax appears to be utterly and completely clueless about their own technology. Equifax's own data breach detector isn't just useless: it's untrustworthy. Adding insult to injury, the credit agency's advice and support site looks, at first glance, to be a bogus, phishing-type site: "equifaxsecurity2017.com." That domain name screams fake. And what does it ask for if you go there? The last six figures of your social security number and last name. In other words, exactly the kind of information a hacker might ask for. Equifax's technical expertise, it has been shown, is less than acceptable. Could the root cause of the hack be a Struts security hole? Two days before the Equifax breach was reported, ZDNet reported a new and significant Struts security problem. While many jumped on this as the security hole, Equifax admitted hackers had broken in between mid-May through July, long before the most recent Struts flaw was revealed. "It's possible that the hackers found the hole on their own, but zero-day exploits aren't that common," reports ZDNet. 
"It's far more likely that -- if the problem was indeed with Struts -- it was with a separate but equally serious security problem in Struts, first patched in March." The question then becomes: is it the fault of Struts developers or Equifax's developers, system admins, and their management? "The people who ran the code with a known 'total compromise of system integrity' should get the blame," reports ZDNet.
An anonymous reader quotes a report from Ars Technica: The manufacturer of EpiPen devices failed to address known malfunctions in its epinephrine auto-injectors even as hundreds of customer complaints rolled in and failures were linked to deaths, according to the Food and Drug Administration. The damning allegations came to light today when the FDA posted a warning letter it sent September 5 to the manufacturer, Meridian Medical Technologies, Inc. The company (which is owned by Pfizer) produces EpiPens for Mylan, which owns the devices and is notorious for dramatically raising prices by more than 400 percent in recent years. The auto-injectors are designed to be used during life-threatening allergic reactions to provide a quick shot of epinephrine. If they fail to fire, people experiencing a reaction can die or suffer serious illnesses. According to the FDA, that's exactly what happened for hundreds of customers. In the letter, the agency wrote: "In fact, your own data show that you received hundreds of complaints that your EpiPen products failed to operate during life-threatening emergencies, including some situations in which patients subsequently died." The agency goes on to lambast Meridian Medical for failing to investigate problems with the devices, recall bad batches, and follow up on problems found. For instance, a customer made a complaint in April 2016 that an EpiPen failed. When Meridian disassembled the device, it found a deformed component that led to the problem -- the exact same defect it had found in February when another unit failed.
Read more of this story at Slashdot.
I was recently looking for a way to extract many attachments from a series of emails. I first had a look at the AttachmentExtractor thunderbird plugin, but it seems very old and not maintained anymore. So I've come up with another very simple solution that also works with any other mail client.
Just copy all the mails you want to extract attachments from to a single (temporary) mail folder, find out which file holds the mail folder and use ripmime on that file (ripmime is packaged for Debian). For my case, it looked like:
~ ripmime -i .icedove/XXXXXXX.default/Mail/pop.xxxx/tmp -d target-directory
Simple solution, but it saved me quite some time. Hope it helps!
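Since Thunderbird/Icedove stores each mail folder as a plain mbox file, the same extraction can also be sketched with Python's standard library, which may help where ripmime isn't packaged. This is an illustrative alternative, not the method above; the function name is made up:

```python
import mailbox
import os

def extract_attachments(mbox_path, target_dir):
    """Save every named MIME part from each message in an mbox folder."""
    os.makedirs(target_dir, exist_ok=True)
    saved = []
    for msg in mailbox.mbox(mbox_path):
        for part in msg.walk():
            filename = part.get_filename()
            if not filename:
                continue  # skip the message body and multipart containers
            payload = part.get_payload(decode=True) or b""
            with open(os.path.join(target_dir, filename), "wb") as f:
                f.write(payload)
            saved.append(filename)
    return saved
```

Point it at the same folder file you would hand to ripmime.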
A new bill is working its way through Congress that could be disastrous for free speech online. EFF is proud to be part of the coalition fighting back.
We all rely on online platforms to work, socialize, and learn. They’re where we go to make friends and share ideas with each other. But a bill in Congress could threaten these crucial online gathering places. The Stop Enabling Sex Traffickers Act (SESTA) might sound virtuous, but it’s the wrong solution to a serious problem.
The Electronic Frontier Foundation, R Street Institute, and over a dozen fellow public interest organizations are joining forces to launch a new website highlighting the problems of SESTA. Together, we’re trying to send a clear message to Congress: Don’t endanger our online communities. Stop SESTA.
SESTA would weaken 47 U.S.C. § 230 (commonly known as "CDA 230" or simply “Section 230”), one of the most important laws protecting free expression online. Section 230 protects Internet intermediaries—individuals, companies, and organizations that provide a platform for others to share speech and content over the Internet. This includes social networks like Facebook, video platforms like YouTube, news sites, blogs, and other websites that allow comments. Section 230 says that an intermediary cannot be held legally responsible for content created by others (with a few exceptions). And that’s a good thing: it’s why we have flourishing online communities where users can comment and interact with one another without waiting for a moderator to review every post.
SESTA would change all of that. It would shift more blame for users’ speech to the web platforms themselves. Under SESTA, web communities would likely become much more restrictive in how they patrol and monitor users’ contributions. Some of the most vulnerable platforms would be ones that operate on small budgets—sites like Wikipedia, the Internet Archive, and small WordPress blogs that play a crucial role in modern life but don’t have the massive budgets to defend themselves that Facebook and Twitter do.
Experts in human trafficking say that SESTA is aiming at the wrong target. Alexandra Levy, adjunct professor of human trafficking and human markets at Notre Dame Law School, writes, “Section 230 doesn’t cause lawlessness. Rather, it creates a space in which many things — including lawless behavior — come to light. And it’s in that light that multitudes of organizations and people have taken proactive steps to usher victims to safety and apprehend their abusers.”
Chinese researchers have discovered a vulnerability in voice assistants from Apple, Google, Amazon, Microsoft, Samsung, and Huawei. It affects every iPhone and MacBook running Siri, any Galaxy phone, any PC running Windows 10, and even Amazon's Alexa assistant. From a report: Using a technique called the DolphinAttack, a team from Zhejiang University translated typical vocal commands into ultrasonic frequencies that are too high for the human ear to hear, but perfectly decipherable by the microphones and software powering our always-on voice assistants. This relatively simple translation process lets them take control of gadgets with just a few words uttered in frequencies none of us can hear. The researchers didn't just activate basic commands like "Hey Siri" or "Okay Google," though. They could also tell an iPhone to "call 1234567890" or tell an iPad to FaceTime the number. They could force a MacBook or a Nexus 7 to open a malicious website. They could order an Amazon Echo to "open the backdoor." Even an Audi Q3 could have its navigation system redirected to a new location. "Inaudible voice commands question the common design assumption that adversaries may at most try to manipulate a [voice assistant] vocally and can be detected by an alert user," the research team writes in a paper just accepted to the ACM Conference on Computer and Communications Security.
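The core of the trick is ordinary amplitude modulation: the audible command is multiplied by an ultrasonic carrier, and nonlinearities in the microphone hardware demodulate it back into the audible band. A toy sketch of just the modulation step (the carrier frequency and sample rate here are illustrative; the paper's actual attack chain is more involved):

```python
import math

def modulate_ultrasonic(samples, carrier_hz=25000.0, rate=192000):
    """Shift a baseband (audible) signal up in frequency by multiplying
    it with an ultrasonic carrier. A microphone amplifier's nonlinear
    response then demodulates the product back to the audible band,
    where the assistant's speech recognizer picks it up."""
    return [s * math.cos(2 * math.pi * carrier_hz * n / rate)
            for n, s in enumerate(samples)]
```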
Read more of this story at Slashdot.
Here’s another innovative use case Apple can mention when they announce the next Apple Watch: The device has reportedly played a key role in a baseball sign-stealing scheme run by the Boston Red Sox. Read More
Movie composer Mark Korven (The Cube and The Witch) wanted a musical instrument that made terrifying sounds so he asked his luthier friend, Tony Duggan-Smith, to make something that fit the bill. Behold the "Apprehension Engine."
From YouTube description:
What happens when a horror movie composer and a guitar maker join forces? They create the world’s most disturbing musical instrument. Affectionately known as "The Apprehension Engine," this one-of-a-kind instrument was commissioned by movie composer Mark Korven. Korven wanted to create spooky noises in a more acoustic and original way—but the right instrument didn't exist. So his friend, guitar maker Tony Duggan-Smith, went deep into his workshop and assembled what has to be the spookiest instrument on Earth.

https://vimeo.com/184366394
An anonymous reader quotes a report from ZDNet: A critical security vulnerability in open-source server software enables hackers to easily take control of an affected server -- putting sensitive corporate data at risk. The vulnerability allows an attacker to remotely run code on servers that run applications using the REST plugin, built with Apache Struts, according to security researchers who discovered the vulnerability. All versions of Struts since 2008 are affected, said the researchers. Apache Struts is used across the Fortune 100 to provide web applications in Java, and it powers front- and back-end applications. Man Yue Mo, a security researcher at LGTM who led the effort behind the bug's discovery, said that Struts is used in many publicly accessible web applications, such as airline booking and internet banking systems. Mo said that all a hacker needs "is a web browser." "I can't stress enough how incredibly easy this is to exploit," said Bas van Schaik, product manager at Semmle, a company whose analytical software was used to discover the vulnerability. The report notes that "a source code fix was released some weeks prior, and Apache released a full patch on Tuesday to fix the vulnerability." It's now a waiting game for companies to patch their systems.
Read more of this story at Slashdot.
<+Meow> what's the alternative to html tables?
<+slew> html chairs
Comment: irc.p2p-network.net / #zomgwtfbbq
The huge cache of addresses was discovered on a server based in the Netherlands - and the researchers are trying to get it taken down
San Francisco, California—The Electronic Frontier Foundation (EFF) and the ACLU won a decision by the California Supreme Court that the license plate data of millions of law-abiding drivers, collected indiscriminately by police across the state, are not “investigative records” that law enforcement can keep secret.
California’s highest court ruled that the collection of license plate data isn’t targeted at any particular crime, so the records couldn’t be considered part of a police investigation.
“This is a big win for transparency in California,” said attorney Peter Bibring, director of police practices at the ACLU of Southern California, which joined EFF in a lawsuit over the records. “The Supreme Court recognized that California’s sweeping public records exemption for police investigations doesn’t cover mass collection of data by police, like the automated scanning of license plates in this case. The Court also recognized that mere speculation by police on the harms that might result from releasing information can’t defeat the public’s strong interest in understanding how police surveillance impacts privacy."
The ruling sets a precedent that mass, indiscriminate data collection by the police can’t be withheld just because the information may contain some criminal data. This is important because police are increasingly using technology tools to surveil and collect data on citizens, whether it’s via body cameras, facial recognition cameras, or license plate readers.
The panel sent the case back to the trial court to determine whether the data can be made public in a redacted or anonymized form so drivers’ privacy is protected.
“The court recognized the huge privacy implications of this data collection,” said EFF Senior Staff Attorney Jennifer Lynch. “Location data like this, that’s collected on innocent drivers, reveals sensitive information about where they have been and when, whether that’s their home, their doctor’s office, or their house of worship.”
Automated License Plate Readers or ALPRs are high-speed cameras mounted on light poles and police cars that continuously scan the plates of every passing car. They collect not only the license plate number but also the time, date, and location of each plate scanned, along with a photograph of the vehicle and sometimes its occupants. The Los Angeles Police Department (LAPD) and the Los Angeles County Sheriff's Department (LASD) collect, on average, three million plate scans every week and have amassed a database of half a billion records.
EFF filed public records requests for a week’s worth of ALPR data from the agencies and, along with American Civil Liberties Union-SoCal, sued after both agencies refused to release the records.
EFF and ACLU SoCal asked the state supreme court to overturn a lower court ruling in the case that said all license plate data—collected indiscriminately and without suspicion that the vehicle or driver was involved in a crime—could be withheld from disclosure as “records of law enforcement investigations.”
EFF and the ACLU SoCal argued the ruling was tantamount to saying all drivers in Los Angeles are under criminal investigation at all times. The ruling would also have set a dangerous precedent, allowing law enforcement agencies to withhold from the public all kinds of information gathered on innocent Californians merely by claiming it was collected for investigative purposes.
EFF and ACLU SoCal will continue fighting for transparency and privacy as the trial court considers how to provide public access to the records so this highly intrusive data collection can be scrutinized and better understood.
For more on this case:
It’s almost too strange to believe, but a federal court ruled earlier this year that copyright can be used to control access to parts of our state and federal laws—forcing people to pay a fee or sign a contract to read and share them. On behalf of Public.Resource.Org, a nonprofit dedicated to improving public access to law, yesterday EFF challenged that ruling in the United States Court of Appeals for the District of Columbia Circuit.
Public.Resource.Org acquires and posts a wide variety of public documents, including regulations that have become law through what’s called “incorporation by reference.” That means that they were initially created at private standards organizations before being adopted into law by cities, states, and federal agencies. By posting these documents online, Public Resource wants to make these requirements more available to the public that must abide by them. But six standards development organizations sued Public Resource, claiming that they have copyright in the regulations, and that Public Resource shouldn’t be allowed to post them at all.
Laws and regulations incorporated by reference include some of our most important protections for health, safety, and fairness. They include fire safety rules for buildings, rules that ensure safe consumer products, rules for energy efficient buildings, and rules for designing fair and accurate standardized tests for students and employees. Once adopted by a legislature or agency, these rules are laws that can carry civil or criminal penalties. For example, a person was charged with manslaughter this year in connection with the deadly Ghost Ship fire in Oakland, California for violating a fire code that became law through incorporation by reference.
According to the district court decision issued in February, the standards development organizations that convene the committees that write these codes and standards can continue to decide who can print them, who can access and post them online, and the price and conditions of that access. It’s as if a lobbyist who submitted a draft bill to Congress could charge fees for access to that bill after Congress and the president pass it into law.
Today, while most laws and regulations in the U.S. can be searched and read on the Web, laws incorporated by reference are locked behind paywalls, or cannot be found online at all. Many are available only in expensive printed books, or in a single office in Washington, D.C. that requires an appointment on several weeks’ notice. Public Resource’s website was designed to fill this gap, which is why it was targeted in a lawsuit.
In our opening brief, EFF, along with co-counsel at Fenwick & West and attorney David Halperin, argued that giving private organizations the power to limit access violates the First Amendment’s guarantee of free speech and the due process protections of the Fifth and Fourteenth Amendments, and contradicts copyright law.
We’re asking the appeals court to fix these errors and uphold the rights of everyone to know the law, and to share it.
A half dozen technology and security companies — some of them competitors — issued the exact same press release today. This unusual level of cross-industry collaboration caps a successful effort to dismantle ‘WireX,’ an extraordinary new crime machine comprising tens of thousands of hacked Android mobile devices that was used this month to launch a series of massive cyber attacks.
Experts involved in the takedown warn that WireX marks the emergence of a new class of attack tools that are more challenging to defend against and thus require broader industry cooperation to defeat.
This graphic shows the rapid growth of the WireX botnet in the first three weeks of August 2017.
News of WireX’s emergence first surfaced August 2, 2017, when a modest collection of hacked Android devices was spotted conducting some fairly small online attacks. Less than two weeks later, however, the number of infected Android devices enslaved by WireX had ballooned to the tens of thousands.
More worrisome was that those in control of the botnet were now wielding it to take down several large websites in the hospitality industry — pelting the targeted sites with so much junk traffic that the sites were no longer able to accommodate legitimate visitors.
Experts tracking the attacks soon zeroed in on the malware that powers WireX: Approximately 300 different mobile apps scattered across Google‘s Play store that were mimicking seemingly innocuous programs, including video players, ringtones or simple tools such as file managers.
“We identified approximately 300 apps associated with the issue, blocked them from the Play Store, and we’re in the process of removing them from all affected devices,” Google said in a written statement. “The researchers’ findings, combined with our own analysis, have enabled us to better protect Android users, everywhere.”
Perhaps to avoid raising suspicion, the tainted Play store applications all performed their basic stated functions. But those apps also bundled a small program that would launch quietly in the background and cause the infected mobile device to surreptitiously connect to an Internet server used by the malware’s creators to control the entire network of hacked devices. From there, the infected mobile device would await commands from the control server regarding which Websites to attack and how.
A sampling of the apps from Google’s Play store that were tainted with the WireX malware.
Experts involved in the takedown say it’s not clear exactly how many Android devices may have been infected with WireX, in part because only a fraction of the overall infected systems were able to attack a target at any given time. Devices that were powered off would not attack, but those that were turned on with the device’s screen locked could still carry on attacks in the background, they found.
“I know in the cases where we pulled data out of our platform for the people being targeted we saw 130,000 to 160,000 (unique Internet addresses) involved in the attack,” said Chad Seaman, a senior engineer at Akamai, a company that specializes in helping firms weather large DDoS attacks (Akamai protected KrebsOnSecurity from hundreds of attacks prior to the large Mirai assault last year).
The identical press release that Akamai and other firms involved in the WireX takedown agreed to publish says the botnet infected a minimum of 70,000 Android systems, but Seaman says that figure is conservative.
“Seventy thousand was a safe bet because this botnet makes it so that if you’re driving down the highway and your phone is busy attacking some website, there’s a chance your device could show up in the attack logs with three or four or even five different Internet addresses,” Seaman said in an interview with KrebsOnSecurity. “We saw attacks coming from infected devices in over 100 countries. It was coming from everywhere.”
Security experts from Akamai and other companies that participated in the WireX takedown say the basis for their collaboration was forged in the monstrous and unprecedented distributed denial-of-service (DDoS) attacks launched last year by Mirai, a malware strain that seeks out poorly-secured “Internet of things” (IoT) devices such as security cameras, digital video recorders and Internet routers.
The first and largest of the Mirai botnets was used in a giant attack last September that knocked this Web site offline for several days. Just a few days after that — when the source code that powers Mirai was published online for all the world to see and use — dozens of copycat Mirai botnets emerged. Several of those botnets were used to conduct massive DDoS attacks against a variety of targets, leading to widespread Internet outages for many top Internet destinations.
Allison Nixon, director of security research at New York City-based security firm Flashpoint, said the Mirai attacks were a wake-up call for the security industry and a rallying cry for more collaboration.
“When those really large Mirai DDoS botnets started showing up and taking down massive pieces of Internet infrastructure, that caused massive interruptions in service for people that normally don’t deal with DDoS attacks,” Nixon said. “It sparked a lot of collaboration. Different players in the industry started to take notice, and a bunch of us realized that we needed to deal with this thing because if we didn’t it would just keep getting bigger and rampaging around.”
Mirai was notable not only for the unprecedented size of the attacks it could launch but also for its ability to spread rapidly to new machines. But for all its sheer firepower, Mirai is not a particularly sophisticated attack platform. Well, not in comparison to WireX, that is.
According to the group’s research, the WireX botnet likely began its existence as a distributed method for conducting “click fraud,” a pernicious form of online advertising fraud that, by recent estimates, will cost publishers and businesses some $16 billion this year. Multiple antivirus tools currently detect the WireX malware as a known click-fraud variant.
The researchers believe that at some point the click-fraud botnet was repurposed to conduct DDoS attacks. While DDoS botnets powered by Android devices are extremely unusual (if not unprecedented at this scale), it is the botnet’s ability to generate what appears to be regular Internet traffic from mobile browsers that strikes fear in the heart of experts who specialize in defending companies from large-scale DDoS attacks.
DDoS defenders often rely on developing custom “filters” or “signatures” that can help them separate DDoS attack traffic from legitimate Web browser traffic destined for a targeted site. But experts say WireX has the capability to make that process much harder.
That’s because WireX includes its own so-called “headless” Web browser that can do everything a real, user-driven browser can do, except without actually displaying the browser to the user of the infected system.
Also, WireX can encrypt the attack traffic using SSL — the same technology that typically protects the security of a browser session when an Android user visits a Web site that requires the submission of sensitive data. This adds a layer of obfuscation to the attack traffic, because the defender must decrypt incoming data packets before being able to tell whether the traffic inside matches a malicious attack signature.
Translation: It can be far more difficult and time-consuming than usual for defenders to tell WireX traffic apart from clicks generated by legitimate Internet users trying to browse to a targeted site.
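To see why, consider the kind of naive header-based signature a defender might write. This toy check is hypothetical, not any vendor's actual filter; a headless browser that emits a full set of real browser headers passes it untouched:

```python
def looks_scripted(headers):
    """Toy DDoS 'signature': flag a request as scripted if it lacks
    headers that real browsers always send. WireX-style headless
    browsers send all of them, so their traffic sails through
    checks like this one."""
    required = {"user-agent", "accept-language", "accept-encoding"}
    return not required <= {k.lower() for k in headers}
```

A bare `curl`-style request gets flagged, while headless-browser traffic does not — which is why defenders end up needing slower, deeper inspection.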
“These are pretty miserable and painful attacks to mitigate, and it was these kinds of advanced functionalities that made this threat stick out like a sore thumb,” Akamai’s Seaman said.
Traditionally, many companies that found themselves on the receiving end of a large DDoS attack sought to conceal this fact from the public — perhaps out of fear that customers or users might conclude the attack succeeded because of some security failure on the part of the victim.
But the stigma associated with being hit with a large DDoS is starting to fade, Flashpoint’s Nixon said, if for no other reason than it is becoming far more difficult for victims to conceal such attacks from public knowledge.
“Many companies, including Flashpoint, have built out different capabilities in order to see when a third party is being DDoS’d,” Nixon said. “Even though I work at a company that doesn’t do DDoS mitigation, we can still get visibility when a third-party is getting attacked. Also, network operators and ISPs have a strong interest in not having their networks abused for DDoS, and many of them have built capabilities to know when their networks are passing DDoS traffic.”
Just as multiple nation states now employ a variety of techniques and technologies to keep tabs on nation states that might conduct underground tests of highly destructive nuclear weapons, a great deal more organizations are now actively looking for signs of large-scale DDoS attacks, Seaman added.
“The people operating those satellites and seismograph sensors to detect nuclear [detonations] can tell you how big it was and maybe what kind of bomb it was, but they probably won’t be able to tell you right away who launched it,” he said. “It’s only when we take many of these reports together in the aggregate that we can get a much better sense of what’s really going on. It’s a good example of none of us being as smart as all of us.”
According to the WireX industry consortium, the smartest step that organizations can take when under a DDoS attack is to talk to their security vendor(s) and make it clear that they are open to sharing detailed metrics related to the attack.
“With this information, those of us who are empowered to dismantle these schemes can learn much more about them than would otherwise be possible,” the report notes. “There is no shame in asking for help. Not only is there no shame, but in most cases it is impossible to hide the fact that you are under a DDoS attack. A number of research efforts have the ability to detect the existence of DDoS attacks happening globally against third parties no matter how much those parties want to keep the issue quiet. There are few benefits to being secretive and numerous benefits to being forthcoming.”
Identical copies of the WireX report and Appendix are available at the following links:
Game studios normally bend over backwards to discourage pirates and keep titles off of any piracy sites, but don't tell that to Acid Wizard. When the studio saw that a young player asked for a refund for its horror game Darkwood out of a fear that hi...
by Jason Chan
This summer marks three years of releasing open source software for the Netflix Cloud Security team. It’s been a busy three years — our most recent release marks 15 open source projects — so we figured a roundup and recap would be useful.
Penetration testing tools, vulnerabilities, and offensive security techniques have dominated security conferences and security-related open source for some time. However, in recent years, more individuals and organizations have been publishing “blue team” and defensive security tools and talks. We’re thrilled that the security industry has become more supportive of sharing these tools and techniques, and we’re more than happy to participate through the release of open source.
Our security-related OSS tends to be reflective of the unique Netflix culture. Many of the tools we’ve released are aimed at facilitating security in high-velocity and distributed software development organizations. Automation is a big part of our approach, and we seek to keep our members, employees, data, and systems safe and secure while enabling innovation. For our team, scale, speed, and integration with the culture are the keys to enabling the business to move fast.
Without further ado, here’s a look back at the OSS we’ve released.
We’ve enjoyed contributing to the OSS security community and have learned a lot from the feedback and collaboration. It’s always instructive to see how software evolves over its lifecycle and to see how others extend it in novel and creative ways. And going forward, we’ll look to make more use of our Skunkworks project to share projects that are experimental or that we don’t necessarily envision supporting long term. We have a few projects we’re considering open sourcing in the near future — if you’re interested, keep an eye on this space, our GitHub site, and @NetflixOSS on Twitter, and check out our YouTube channel for more talks from our team.
A Brief History of Open Source from the Netflix Cloud Security Team was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.
Original release date: August 21, 2017
On October 11, 2017, the Internet Corporation for Assigned Names and Numbers (ICANN) will be changing the Root Zone Key Signing Key (KSK) used in the domain name system (DNS) Security Extensions (DNSSEC) protocol.
DNSSEC is a set of DNS protocol extensions used to digitally sign DNS information, which is an important part of preventing domain name hijacking. Updating the DNSSEC KSK is a crucial security step, similar to updating a PKI Root Certificate. Maintaining an up-to-date Root KSK as a trust anchor is essential to ensuring DNSSEC-validating DNS resolvers continue to function after the rollover. While DNSSEC validation is mandatory for federal agencies, it is not required of the private sector. Systems of organizations that do not use DNSSEC validation will be unaffected by the rollover.
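As background on how a resolver recognizes its trust anchor: each DNSKEY is identified by a 16-bit key tag computed over its RDATA (flags, protocol, algorithm, public key) per RFC 4034 Appendix B, and the rollover swaps the root KSK from tag 19036 (KSK-2010) to tag 20326 (KSK-2017). A minimal sketch of that computation — the sample bytes in the test below are synthetic, not the real root key:

```python
def dnskey_key_tag(rdata):
    """RFC 4034 Appendix B key-tag computation over DNSKEY RDATA.
    Resolvers use the resulting 16-bit tag to match a published key
    against their configured trust anchor."""
    acc = 0
    for i, b in enumerate(rdata):
        # Even-indexed bytes are the high octet of a 16-bit word.
        acc += (b << 8) if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF  # fold in the carry
    return acc & 0xFFFF
```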
Some people might assume that Gen Con would only be of interest to gamers—but that couldn’t be farther from the truth. Wednesday was the third Gen Con Trade Day I’ve attended, in which librarians and educators give presentations to other librarians and educators on how they make use of games and gaming in their curricula or programming. (There are also panels focused on retailers, but I didn’t attend any of those.) Here are my reports on Trade Day 2015 and 2016.
I missed out on the first hour due to arriving late, but I was still able to catch five out of the six hours of programming. Here are my reports on the panels I attended.
This panel was a roundtable discussion focusing on various games that could be useful in a classroom setting. Teachers and others swapped stories, tips, and suggestions about games that could be useful in teaching, and how they could be used.
One teacher shared the story of a 5th grade teacher doing a unit on advertising, and some kids had trouble coming up with fictitious products to make up ads about. But it turned out there was a game called Snake Oil, which involved coming up with products to sell, that helped those students get past that problem.
Games that the moderators or other attendees shared included Rory’s Story Cubes (which I previously covered for TeleRead), Dixit, Word on the Street, Word Teasers, Love Letter, Ticket to Ride, Qwirkle, In a Pickle, Verbal Volley, and Apples to Apples. I offered up The Storymatic (which I also covered in that piece about Story Cubes) and Storium.
This event was put on by panelists from the Chicago Public Library, using a problem-solving technique called “Design Thinking” that the library has developed in partnership with IDEO and the Bill & Melinda Gates Foundation.
The library walked through how it used the process to come up with a way of implementing a gaming program that would appeal to adults as well as children. The process involves a three-step cycle of inspiration, ideation, and implementation. As the Wikipedia article on Design Thinking explains:
Inspiration is the initial problem or opportunity that leads you to the finding of the solution; ideation is the core of the development process where the idea is better defined; and implementation is the final step where the solution comes in contact with the outer world. Projects may loop back through inspiration, ideation, and implementation more than once as the team refines its ideas and explores new directions.
The presentation went step by step through how the library applied these stages, in more detail than I really have time to lay out here. Suffice it to say that their experimentation and analysis led them to have a kiosk they could set up at local events and expositions at which they would offer a set number of games for people to check out and play. The number of games varied from just a few at most events, to many at specific events like a movies-in-the-park program in which a large number of people would show up hours early and then need ways to pass the time.
The process seemed interesting, though the presentation seemed more about how to use design thinking than about gaming specifically. Still, librarians with an interest might want to check out the link in the first paragraph of this section; the site offers a toolkit that librarians can study for “5-8 hours a week for the next six weeks,” depending on how much time they have available.
This panel was put on by staff from the Mendocino County Public Library in Ukiah, California. This seems to be a pretty small library as libraries go—small enough that it doesn’t even have a web site of its own, but has to make do with a section of the Mendocino County Government web site and a Facebook page.
But this panel was proof that even small libraries can come up with great ideas for interacting with their community. The librarians discussed how an arts-and-crafts day for teenagers to make their own padded LARP (Live-Action Role-Playing) “boffa” swords turned into a “LARPspedition” game day in which area businesses participated in a “treasure hunt,” followed by a LARP combat event at the library itself and an ice cream social.
The librarians went over how they ran the program step by step, including crafting the swords, coming up with clues to direct people to participating businesses, and—not least importantly—getting in touch with their local police department to make sure that people running around the community with padded “LARP swords” would be all right that day.
This panel, and the event it described, are a great example of how librarians can build relationships with their communities through means that go beyond just recommending good books. Perhaps more libraries should consider hosting similar game-related events.
This panel had more to do with game design than with education or libraries, but it still taught some lessons that are worth remembering. It was presented by a pair of game designers from Thorny Games, who discussed how to navigate the potentially thorny (pun not intended) problem of basing games on cultures not one’s own.
They used as an example their game Sign, which was based on the birth of the unique Nicaraguan Sign Language in the 1970s. This language, they explained, emerged spontaneously when hundreds of deaf Nicaraguan children were brought together in special schools meant to teach them to read lips. Instead of learning to read lips, the children effectively negotiated a language of their own to talk to each other. This was such a fascinating idea that the game developers wanted to base a game on it.
But basing a game on people of a different culture can be tricky. Developers have to ask themselves whether they’re the right people to be making such a game, or whether it might be better to work with someone else who is in a better position to represent that culture. They should also reach out to members of that culture and involve them in the process (as the designers did for Sign by reaching out to members of the Nicaraguan deaf community).
It’s also important to remember that a game makes statements not just in what it says, but in how it plays. During the design process for Sign, the designers realized at one stage that the way the game played was making a potentially destructive statement—implying that sign languages in general were more primitive and simplistic than spoken languages. They had to make some corrections along the way to make sure they weren’t sending that kind of message.
But these problems and challenges aren’t a reason not to try to make these games. They made the point that games are an excellent way to help develop and promote empathy—to get people to put themselves into the positions of people of different cultures and backgrounds, in ways that simply reading or watching a story never could. I think that’s an important message to consider—all the more so given the events of the last week or so.
If you want to see how the developers put these theories into practice, Sign is currently downloadable as a PDF from the Thorny Games web site.
This panel really didn’t have a lot of relevance to libraries, education, or even game retail. It was a professional investment counselor giving a set of tips for overseeing one’s investments in 401(k)s, pension plans, the stock market, etc. It seemed like reasonably sound advice, but it’s not exactly topical enough to go over in detail here.
Perhaps the more important thing to take away from this panel is that Gen Con—and Gen Con Trade Day—is a place where you can find a lot more useful advice than you might expect, on more topics than you might expect. If you’re an educator, librarian, or gaming retail industry professional, make the time to arrive a day early for Gen Con so you can take in Gen Con Trade Day first.
If you found this post worth reading and want to kick in a buck or two to the author, click here.
"Don't Be a Sucker" is as timely now as it was back in 1947:
Don't Be a Sucker! is a short educational film produced by the U.S. War Department in 1943 and re-released in 1947. The film depicts the rise of Nazism in Germany and warns Americans against repeating the mistakes of intolerance made in Nazi Germany. It emphasizes that Americans will lose their country if they let themselves be turned into "suckers" by the forces of fanaticism and hatred. The film was made to make the case for the desegregation of the United States armed forces by simply revealing the connection between prejudice and fascism.
This film is not propaganda. To the contrary, it teaches how to recognize and reject propaganda, such as the Nazis used to promote bigotry and intimidation. It shows how prejudice can be used to divide a population in order to gain power. Far more significantly, it then shows how such tactics can be defanged by friendly persuasion, and that protecting liberty is a unifying and practical way to live peacefully.