Like the Spanish Inquisition, nobody expects cascading failures. Here's how Google handles them.
This excerpt, "Addressing Cascading Failures" (Chapter 22), is a particularly interesting and comprehensive chapter from Google's awesome book on Site Reliability Engineering. Worth reading if it hasn't been on your radar. And it's free!
Written by Mike Ulrich
"If at first you don't succeed, back off exponentially."
Dan Sandler, Google Software Engineer
"Why do people always forget that you need to add a little jitter?"
Ade Oshineye, Google Developer Advocate
A cascading failure is a failure that grows over time as a result of positive feedback. It can occur when a portion of an overall system fails, increasing the probability that other portions of the system fail. For example, a single replica for a service can fail due to overload, increasing load on remaining replicas and increasing their probability of failing, causing a domino effect that takes down all the replicas for a service.
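The two quotes above are the canonical mitigation in miniature: clients should retry with exponentially growing delays, and add random jitter so retries from many clients don't arrive in synchronized waves. A minimal sketch of that policy (illustrative only; the function and parameter names are not from the chapter):

```python
import random

def backoff_delays(base=0.1, cap=60.0, attempts=8):
    """Yield retry delays using capped exponential backoff with full jitter."""
    for attempt in range(attempts):
        # Exponential growth: base * 2^attempt, capped so delays stay bounded.
        ceiling = min(cap, base * (2 ** attempt))
        # Full jitter: sample uniformly in [0, ceiling] to desynchronize clients.
        yield random.uniform(0, ceiling)
```

Without the jitter step, every client that failed at the same moment also retries at the same moments, re-creating the very overload spike that caused the failure.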
Over the past 28 years, the Hubble Space Telescope has inspired a generation of astronomers with insanely dramatic views of the universe, but it's hardly done blowing our minds. NASA has unveiled a new fly-through video of the Lagoon Nebula. Located...
Thank you to Neil Gaiman for Norse Mythology. I was just held hostage by two young girls (8/10) until I finished the book. Watching my youngest reenact the battles of Ragnarok was magical.
I really like Neil Gaiman as an author and look forward to all of his new books. I ended up getting Norse Mythology because I thought it might be an interesting read, and my girls saw it and wanted in. We ended up reading a chapter/story every night and they couldn't get enough of it. Last night we came to the ending story of Ragnarok and they were totally entranced. I try and do voices and act out things when I read to them, and my youngest took it a step farther by acting out the battle in her room while I was reading. Experiences like this make being a parent special.
I found that the writing was really easy to get into. It felt like I was an old storyteller relating long lost legends to a new generation (which I guess I was). I really hope that he decides to tackle other mythologies because I would love to be able to share more old stories with my kids.
Last week, Superman celebrated his 80th year as the world’s most recognizable superhero. Tons of conversations have been happening about favorite stories and moments in the Kal-El canon, and it’s worth thinking about the ones that came out since the dawn of the new millennium.
MEDantex, a Kansas-based company that provides medical transcription services for hospitals, clinics and private physicians, took down its customer Web portal last week after being notified by KrebsOnSecurity that it was leaking sensitive patient medical records — apparently for thousands of physicians.
On Friday, KrebsOnSecurity learned that the portion of MEDantex’s site which was supposed to be a password-protected portal physicians could use to upload audio-recorded notes about their patients was instead completely open to the Internet.
What’s more, numerous online tools intended for use by MEDantex employees were exposed to anyone with a Web browser, including pages that allowed visitors to add or delete users, and to search for patient records by physician or patient name. No authentication was required to access any of these pages.
This exposed administrative page from MEDantex’s site granted anyone complete access to physician files, as well as the ability to add and delete authorized users.
Several MEDantex portal pages left exposed to the Web suggest that the company recently was the victim of WhiteRose, a strain of ransomware that encrypts a victim’s files unless and until a ransom demand is paid — usually in the form of some virtual currency such as bitcoin.
Contacted by KrebsOnSecurity, MEDantex founder and chief executive Sreeram Pydah confirmed that the Wichita, Kansas based transcription firm recently rebuilt its online servers after suffering a ransomware infestation. Pydah said the MEDantex portal was taken down for nearly two weeks, and that it appears the glitch exposing patient records to the Web was somehow incorporated into that rebuild.
“There was some ransomware injection [into the site], and we rebuilt it,” Pydah said, just minutes before disabling the portal (which remains down as of this publication). “I don’t know how they left the documents in the open like that. We’re going to take the site down and try to figure out how this happened.”
It’s unclear exactly how many patient records were left exposed on MEDantex’s site. But one of the main exposed directories was named “/documents/userdoc,” and it included more than 2,300 physicians listed alphabetically by first initial and last name. Drilling down into each of these directories revealed a varying number of patient records — displayed and downloadable as Microsoft Word documents and/or raw audio files.
Although many of the exposed documents appear to be quite recent, some of the records dated as far back as 2007. It’s also unclear how long the data was accessible, but this Google cache of the MEDantex physician portal seems to indicate it was wide open on April 10, 2018.
Clients listed on MEDantex’s site include New York University Medical Center; San Francisco Multi-Specialty Medical Group; Jackson Hospital in Montgomery, Ala.; Allen County Hospital in Iola, Kan.; Green Clinic Surgical Hospital in Ruston, La.; Trillium Specialty Hospital in Mesa and Sun City, Ariz.; Cooper University Hospital in Camden, N.J.; Sunrise Medical Group in Miami; the Wichita Clinic in Wichita, Kan.; the Kansas Spine Center; the Kansas Orthopedic Center; and Foundation Surgical Hospitals nationwide. MEDantex’s site states these are just some of the healthcare organizations partnering with the company for transcription services.
Unfortunately, the incident at MEDantex is far from an anomaly. A study of data breaches released this month by Verizon Enterprise found that nearly a quarter of all breaches documented by the company in 2017 involved healthcare organizations.
Verizon says ransomware attacks accounted for 85 percent of all malware in healthcare breaches last year, and that healthcare is the only industry in which the threat from the inside is greater than that from outside.
“Human error is a major contributor to those stats,” the report concluded.
Source: Verizon Business 2018 Data Breach Investigations Report.
According to a story at BleepingComputer, a security news and help forum that specializes in covering ransomware outbreaks, WhiteRose was first spotted about a month ago. BleepingComputer founder Lawrence Abrams says it’s not clear how this ransomware is being distributed, but that reports indicate it is being manually installed by hacking into Remote Desktop services.
Fortunately for WhiteRose victims, this particular strain of ransomware is decryptable without the need to pay the ransom.
“The good news is this ransomware appears to be decryptable by Michael Gillespie,” Abrams wrote. “So if you become infected with WhiteRose, do not pay the ransom, and instead post a request for help in our WhiteRose Support & Help topic.”
Ransomware victims may also be able to find assistance in unlocking data without paying from nomoreransom.org.
KrebsOnSecurity would like to thank India-based cybersecurity startup Banbreach for the heads up about this incident.
An anonymous reader quotes a report from Ars Technica: A newly published "exploit chain" for Nvidia Tegra X1-based systems seems to describe an apparently unpatchable method for running arbitrary code on all currently available Nintendo Switch consoles. Hardware hacker Katherine Temkin and the hacking team at ReSwitched released an extensive outline of what they're calling the Fusee Gelee coldboot vulnerability earlier today, alongside a proof-of-concept payload that can be used on the Switch. "Fusee Gelee isn't a perfect, 'holy grail' exploit -- though in some cases it can be pretty damned close," Temkin writes in an accompanying FAQ. The exploit, as outlined, makes use of a vulnerability inherent in the Tegra X1's USB recovery mode, circumventing the lock-out operations that would usually protect the chip's crucial bootROM. By sending a bad "length" argument to an improperly coded USB control procedure at the right point, the user can force the system to "request up to 65,535 bytes per control request." That data easily overflows a crucial direct memory access (DMA) buffer in the bootROM, in turn allowing data to be copied into the protected application stack and giving the attacker the ability to run arbitrary code. The exploit can't be fixed via a downloadable patch because the flawed bootROM can't be modified once the Tegra chip leaves the factory. As Temkin writes, "unfortunately, access to the fuses needed to configure the device's ipatches was blocked when the ODM_PRODUCTION fuse was burned, so no bootROM update is possible. It is suggested that consumers be made aware of the situation so they can move to other devices, where possible." Ars notes that Nintendo may however be able to detect "hacked" systems when they sign on to Nintendo's servers. "The company could then ban those systems from using the Switch's online functions."
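The flaw Temkin describes is a classic unchecked-length copy: the handler trusts the caller-supplied length field instead of clamping it to the destination buffer's capacity. A toy Python illustration of that bug class (hypothetical names and sizes; the real bootROM is C code on the Tegra X1):

```python
DMA_BUFFER_SIZE = 0x1000  # hypothetical buffer size, not the real Tegra value

def handle_control_request(payload: bytes, w_length: int) -> bytes:
    """Buggy: honors any caller-supplied length up to 0xFFFF. In the C
    original, copying this many bytes overruns the DMA buffer and spills
    attacker-controlled data into the protected application stack."""
    return payload[:w_length]

def handle_control_request_fixed(payload: bytes, w_length: int) -> bytes:
    """Correct: clamp the requested length to the buffer's capacity."""
    return payload[:min(w_length, DMA_BUFFER_SIZE)]
```

Because the missing clamp lives in mask ROM, the fix can only ship in new silicon, which is exactly why the exploit is unpatchable on consoles already sold.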
Read more of this story at Slashdot.
Some people, when they look up at the sky and see a cloud, think “dog” or “fluffy.” And some people think “it’s a waning cumulus with a feathered edge suggesting a pressure system from the north ending in an updraft, which would probably cause turbulence. Also looks a bit like a dog.” Clearly one of those people created these complex, beautiful renderings of weather data.
The idea behind this project at ETH Zürich, led by Markus Gross, is that different visualizations of detailed weather data may be highly useful in different fields. He and his colleagues have been working on a huge set of such data and finding ways of accurately representing it with an eye to empowering meteorologists from the TV station to the research lab.
“The scientific value of our visualisation lies in the fact that we make something visible that was impossible to see with the existing tools,” explained undergraduate researcher Noël Rimensberger in an ETHZ news release. Representing weather “in a relatively simple, comprehensible way” is its own reward, really.
The data in question are all from the evening of April 26, 2013, the date chosen for a large-scale meteorology project in which multiple institutions collaborated. The team created different ways to visualize different bodies of data.
For instance, if you were looking down on a whole county, what’s the use of seeing every little ripple of a cloud system? What you need are larger trends and ways of picking out important data points, such as areas likely to develop precipitation, or where the beginnings of movement suggest a cold front moving in.
On the other hand, such macro data has no place when you’re looking at the formation of clouds over a single locality, or why a storm seems to have struck with unnatural fierceness there.
And again, what if you’re a small aircraft pilot? A little rain and clouds you might not mind, but what if you want to see patterns of turbulence in the country and how they move as the day wears on? Or if you’re investigating what led to a crash at a particular location and time?
These visualizations show how a large set of data can be interpreted and displayed in many ways and to many purposes.
Tobias Günther, Rimensberger’s supervisor on the project, pointed out that the algorithms they used to interpret the reams of data and create these simulations are far too slow at present, but they’re working on improving them. Still, some could be used if time isn’t of the essence.
You can find a link to download the full paper, created for an ETH Zürich visualization contest, at the university’s website.
TIL how the UK military recruiter mistook "cryptogamist" (algae expert) for "cryptogramist" and sent Geoffrey Tandy to join the code breakers; he wasn't so useful until captured German papers arrived water-logged; with his expertise they salvaged them, cracked the code, and hastened the victory.
NASA has released incredible new images of the Lagoon Nebula taken by the Hubble space telescope, in honor of its 28th anniversary and presumably 4/20. Dude... have you ever like... thought about how small we are... and how big the universe is...?
"Those who designed our digital world are aghast at what they created," argues a new article in New York Magazine titled "The Internet Apologizes". Today, the most dire warnings are coming from the heart of Silicon Valley itself. The man who oversaw the creation of the original iPhone believes the device he helped build is too addictive. The inventor of the World Wide Web fears his creation is being "weaponized." Even Sean Parker, Facebook's first president, has blasted social media as a dangerous form of psychological manipulation. "God only knows what it's doing to our children's brains," he lamented recently... The internet's original sin, as these programmers and investors and CEOs make clear, was its business model. To keep the internet free -- while becoming richer, faster, than anyone in history -- the technological elite needed something to attract billions of users to the ads they were selling. And that something, it turns out, was outrage. As Jaron Lanier, a pioneer in virtual reality, points out, anger is the emotion most effective at driving "engagement" -- which also makes it, in a market for attention, the most profitable one. By creating a self-perpetuating loop of shock and recrimination, social media further polarized what had already seemed, during the Obama years, an impossibly and irredeemably polarized country... What we're left with are increasingly divided populations of resentful users, now joined in their collective outrage by Silicon Valley visionaries no longer in control of the platforms they built. Lanier adds that "despite all the warnings, we just walked right into it and created mass behavior-modification regimes out of our digital networks." Sean Parker, the first president of Facebook, is even quoted as saying that a social-validation feedback loop is "exactly the kind of thing that a hacker like myself would come up with, because you're exploiting a vulnerability in human psychology. 
The inventors, creators -- it's me, it's Mark [Zuckerberg], it's Kevin Systrom on Instagram, it's all of these people -- understood this consciously. And we did it anyway." The article includes quotes from Richard Stallman, arguing that data privacy isn't the problem. "The problem is that these companies are collecting data about you, period. We shouldn't let them do that. The data that is collected will be abused..." He later adds that "We need a law that requires every system to be designed in a way that achieves its basic goal with the least possible collection of data... No company is so important that its existence justifies setting up a police state." The article proposes hypothetical solutions. "Could a subscription model reorient the internet's incentives, valuing user experience over ad-driven outrage? Could smart regulations provide greater data security? Or should we break up these new monopolies entirely in the hope that fostering more competition would give consumers more options?" Some argue that the Communications Decency Act of 1996 shields internet companies from all consequences for bad actors -- de-incentivizing the need to address them -- and Marc Benioff, CEO of Salesforce, thinks the solution is new legislation. "The government is going to have to be involved. You do it exactly the same way you regulated the cigarette industry. Technology has addictive qualities that we have to address, and product designers are working to make those products more addictive. We need to rein that back."
In a news bulletin, University of California, Berkeley announces that its "Foundations of Data Science" course is "being offered free online this spring for the first time through the campus's online education hub, edX." From the report: The course -- Data 8X (Foundations of Data Science) -- covers everything from testing hypotheses and applying statistical inferences to visualizing distributions and drawing conclusions, all while coding in Python and using real-world data sets. One lesson might take economic data from different countries over the years to track global economic growth. The next might use a data set of cell samples to create a classification algorithm that can diagnose breast cancer. (Learn more from a video on the Berkeley data science website.) The online program is based on the Foundations of Data Science course that Berkeley launched on campus in 2015 and now has more than 1,000 students enrolling every semester. The Foundations of Data Science edX Professional Certificate program is a sequence of three five-week courses taught by three winners of Berkeley's top teaching honor, the Distinguished Teaching Award: DeNero, statistics professor Ani Adhikari and computer science professor David Wagner. The first of the three parts has already started (9 a.m. on April 2), but enrollment will remain open after the course begins. Furthermore, anyone in the world can enroll for free but those who want to earn the certificate will need to pay.
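The flavor of the course's statistics material is easy to show: hypothesis tests like the ones it teaches can be written in a few lines of stdlib Python. A sketch of a simple permutation test on made-up data (illustrative, not actual course material):

```python
import random

def permutation_test(group_a, group_b, trials=10_000, seed=0):
    """Estimate a p-value for the observed difference in means, under the
    null hypothesis that both groups come from the same distribution."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(trials):
        # Randomly relabel the pooled data and recompute the statistic.
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / trials
```

With two clearly separated samples the estimated p-value is small; with identical samples it is 1.0, since every shuffle matches the observed (zero) difference.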
Slashdot reader silverdirk writes: Compiled languages have long provided access to the OpenGL API, and even most scripting languages have had OpenGL bindings for a decade or more. But, one significant language missing from the list is our old friend/nemesis Bash. But worry no longer! Now you can create your dazzling 3D visuals right from the comfort of your command line! "You'll need a system with both Bash and OpenGL support to experience it firsthand," explains software engineer Michael Conrad, who created the first version 13 years ago as "the sixth in a series of 'Abuse of Technology' projects," after "having my technical sensibilities offended that someone had written a real-time video game in Perl. "Back then, my primary language was C++, and I was studying OpenGL for video game purposes. I declared to my friends that the only thing worse would be if it had been 3D and written in Bash. Having said the idea out loud, it kept prodding me, and I eventually decided to give it a try to one-up the 'awfulness'..."
New submitter xonen shares a report from NPR: For decades, scientists have thought that black holes should sink to the center of galaxies and accumulate there. But scientists had no proof that these exotic objects had actually gathered together in the center of the Milky Way. Isolated black holes are almost impossible to detect, but black holes that have a companion -- an orbiting star -- interact with that star in ways that allow the pair to be spotted by telltale X-ray emissions. The team searched for those signals in a region stretching about three light-years out from our galaxy's central supermassive black hole. What they found there: a dozen black holes paired up with stars. Finding so many in such a small region is significant, because until now scientists have found evidence of only about five dozen black holes throughout the entire galaxy. What they've found should help theorists make better predictions about how many cosmic smashups might occur and generate detectable gravitational waves. The study has been published in the journal Nature.
Researchers have created a wearable device that can read people's minds when they use an internal voice, allowing them to control devices and ask queries without speaking. From a report: The device, called AlterEgo, can transcribe words that wearers verbalise internally but do not say out loud, using electrodes attached to the skin. "Our idea was: could we have a computing platform that's more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?" said Arnav Kapur, who led the development of the system at MIT's Media Lab. Kapur describes the headset as an "intelligence-augmentation" or IA device; the system was presented at the Association for Computing Machinery's Intelligent User Interface conference in Tokyo. It is worn around the jaw and chin, clipped over the top of the ear to hold it in place. Four electrodes under the white plastic device make contact with the skin and pick up the subtle neuromuscular signals that are triggered when a person verbalises internally. When someone says words inside their head, artificial intelligence within the device can match particular signals to particular words, feeding them into a computer.
Unconscious and implicit biases can show up at every step of the hiring process. That’s why Greenhouse, a recruiting and applicant tracking program, has partnered with diversity and inclusion consulting firm Paradigm to bake inclusive hiring practices into the entire process.
Greenhouse Inclusion is designed to help companies employ inclusive hiring practices throughout a job candidate’s entire interaction with the company, from application to interview to job. The platform’s features include automatic resume anonymization, real-time interventions to reduce bias, safeguards to ensure every candidate is evaluated consistently, as well as candidate data and analytics.
“We considered interventions that would have impact across the hiring funnel, influencing who applies to a role in the first place, how hiring decisions are made, and the inclusivity of the process end-to-end,” Paradigm CEO Joelle Emerson said in an email to TechCrunch. “We then collaborated with Greenhouse to determine how software could make those inclusive best practices scalable. For example, we know that when interviewers evaluate candidates in a more structured way – assessing only relevant qualifications and considering those same qualifications for all candidates – they’ll make more objective and equitable decisions. This product includes interventions designed to prompt greater structure and reflection from interviewers.”
The platform also features something called “nudges,” which act as tips to remind hiring managers to draft inclusive job descriptions, prompt employees to take diversity into account when making referrals and remind interviewers to slow down and reflect while evaluating a candidate.
Over the years, Greenhouse has become a popular recruiting tool for tech companies like Airbnb, Pinterest, Twilio, Lyft, SurveyMonkey, DocuSign, Evernote and Vimeo. That means there’s some potential for the implementation of Greenhouse Inclusion to result in actual change, in terms of the demographic makeup of the tech industry’s workforce. So far, Pinterest is the only company that has signed on to use Greenhouse Inclusion.
“We’re the first to bring out a holistic technological solution to address bias in hiring,” Greenhouse Inclusion product manager Alex Powell said in a blog post. “We’re the de facto leaders and we need to set the right path for this.”
Security guru Bruce Schneier warns that "thousands of companies" are spying on us and manipulating us for profit. An anonymous reader quotes his article on CNN: Harvard Business School professor Shoshana Zuboff calls it "surveillance capitalism." And as creepy as Facebook is turning out to be, the entire industry is far creepier. It has existed in secret far too long, and it's up to lawmakers to force these companies into the public spotlight, where we can all decide if this is how we want society to operate and -- if not -- what to do about it... Surveillance capitalism drives much of the internet. It's behind most of the "free" services, and many of the paid ones as well. Its goal is psychological manipulation, in the form of personalized advertising to persuade you to buy something or do something, like vote for a candidate. And while the individualized profile-driven manipulation exposed by Cambridge Analytica feels abhorrent, it's really no different from what every company wants in the end... Surveillance capitalism is deeply embedded in our increasingly computerized society, and if the extent of it came to light there would be broad demands for limits and regulation. But because this industry can largely operate in secret, only occasionally exposed after a data breach or investigative report, we remain mostly ignorant of its reach... Regulation is the only answer. The first step to any regulation is transparency. Who has our data? Is it accurate? What are they doing with it? Who are they selling it to? How are they securing it? Can we delete it...? The market can put pressure on these companies to reduce their spying on us, but it can only do that if we force the industry out of its secret shadows. The article also insists that "None of this is new," pointing out that companies like Facebook and Google offer their free services in exchange for your data.
But he also notes that there are already 2,500 to 4,000 data brokers in the U.S. alone, including Equifax.
An anonymous reader quotes CNN: The U.S. Department of Justice is asking the Supreme Court to abandon its case against Microsoft over international data privacy. A new law signed by President Donald Trump last week answers the legal question at the heart of Microsoft's case, the DOJ says. So the case "is now moot," the department said in a court filing posted Saturday. Microsoft's legal battle began in 2013, when it refused to hand over emails stored on a server in Ireland to US officials who were investigating drug trafficking. Microsoft argued at the time that sharing data stored abroad could violate international treaties and policies, and there was no law on the books to provide any clarity. That changed with the Cloud Act, which was tucked into the spending bill that Trump signed March 23. The act establishes a legal pathway for the United States to form agreements with other nations that make it easier for law enforcement to collect data stored on foreign soil... Microsoft cheered the new law, saying the Cloud Act provides the legal clarity the company sought. The ACLU's legislative counsel argues that the new act hurts privacy and human rights, "at a time when human rights activists, dissidents and journalists around the world face unprecedented attacks." "Would even a well-intentioned technology company, particularly a small one, have the expertise and resources to competently assess the risk that a foreign order may pose to a particular human rights activist?"
A surprise discovery announced a month ago suggested that the early universe looked very different than previously believed. Initial theories that the discrepancy was due to dark matter have come under fire.
Aardman Animations is one of only a few studios keeping the art of stop-motion animation alive. But for its latest feature, Early Man, even Aardman’s talented animators took advantage of modern filmmaking tricks to help bring an entire stadium full of Bronze Age soccer fans to life.
The internet is made for many wonderful things, but also, it’s made for yelling at each other loudly about beloved animated movies. So why not add some more fuel to the fire with your own Disney vs. Pixar March Madness bracket?
It seems obvious that the way a robot moves would affect how people interact with it, and whether they consider it easy or safe to be near. But what poses and movement types specifically are reassuring or alarming? Disney Research looked into a few of the possibilities of how a robot might approach a simple interaction with a nearby human.
The study had people picking up a baton with a magnet at one end and passing it to a robotic arm, which would automatically move to collect the baton with its own magnet.
But the researchers threw variations into the mix to see how they affected the forces involved, how people moved and what they felt about the interaction. The robot had two types each of three phases: movement into position, grasping the object and removing it from the person’s hand.
For movement, it either started hanging down inertly and sprung up to move into position, or it began already partly raised. The latter condition was found to make people accommodate the robot more, putting the baton into a more natural position for it to grab. Makes sense — when you pass something to a friend, it helps if they already have their hand out.
Grasping was done either quickly or more deliberately. In the first condition the robot’s arm attaches the magnet as soon as it’s in position; in the second, it pushes up against the baton and repositions it for a more natural way to pull out. There wasn’t a big emotional difference here, but opposing forces were much less in the second grasp type, perhaps meaning it was easier.
Once attached, the robot retracted the baton either slowly or more quickly. Humans preferred the former, saying that the latter felt as if the object was being yanked out of their hands.
The results won’t blow anyone’s mind, but they’re an important contribution to the fast-growing field of human-robot interaction. Soon there ought to be best practices for this kind of thing when we’re interacting with robots that, say, clear the table at a restaurant or hand workers items in a factory. That way they’ll be operating with the knowledge that they won’t be producing any unnecessary anxiety in nearby humans.
A side effect of all this was that the people in the experiment gradually seemed to learn to predict the robot’s movements and accommodate them — as you might expect. But it’s a good sign that even over a handful of interactions a person can start building a rapport with a machine they’ve never worked with before.
Long-time Slashdot reader Lauren Weinstein argues that fixing Facebook may be impossible because "Facebook's entire ecosystem is predicated on encouraging the manipulation of its users by third parties who possess the skills and financial resources to leverage Facebook's model. These are not aberrations at Facebook -- they are exactly how Facebook was designed to operate." Meanwhile one fund manager is already predicting that sooner or later every social media platform "is going to become MySpace," adding that "Nobody young uses Facebook," and that the backlash over Cambridge Analytica "quickens the demise." But Slashdot reader silvergeek asks, "is there a safe, secure, and ethical alternative?" to which tepples suggests "the so-called IndieWeb stack using the h-entry microformat." He also suggests Diaspora, with an anonymous Diaspora user adding that "My family uses a server I put up to trade photos and posts... Ultimately more people need to start hosting family servers to help us get off the cloud craze... NethServer is a pretty decent CentOS based option." Meanwhile Slashdot user Locke2005 shared a Washington Post profile of Mastodon, "a Twitter-like social network that has had a massive spike in sign-ups this week." Mastodon's code is open-source, meaning anybody can inspect its design. It's distributed, meaning that it doesn't run in some data center controlled by corporate executives but instead is run by its own users who set up independent servers. And its development costs are paid for by online donations, rather than through the marketing of users' personal information... Rooted in the idea that it doesn't benefit consumers to depend on centralized commercial platforms sucking up users' personal information, these entrepreneurs believe they can restore a bit of the magic from the Internet's earlier days -- back when everything was open and interoperable, not siloed and commercialized.
The article also interviews the founders of Blockstack, a blockchain-based marketplace for apps where all user data remains local and encrypted. "There's no company in the middle that's hosting all the data," they tell the Post. "We're going back to the world where it's like the old-school Microsoft Word -- where your interactions are yours, they're local and nobody's tracking them." On Medium, Mastodon founder Eugen Rochko also acknowledges Scuttlebutt and Hubzilla, ending his post with a message to all social media users: "To make an impact, we must act."

Lauren Weinstein believes Google has already created an alternative to Facebook's "sick ecosystem": Google Plus. "There are no ads on Google+. Nobody can buy their way into your feed or pay Google for priority. Google doesn't micromanage what you see. Google doesn't sell your personal information to any third parties..." And most importantly, "There's much less of an emphasis on hanging around with those high school nitwits whom you despised anyway, and much more a focus on meeting new persons from around the world for intelligent discussions... G+ posts more typically are about 'us' -- and tend to be far more interesting as a result." (Even Linus Torvalds is already reviewing gadgets there.)

Wired has also compiled their own list of alternatives to every Facebook service. But what are Slashdot's readers doing for their social media fix? Leave your own thoughts and suggestions in the comments. Is there a good alternative to Facebook?
Read more of this story at Slashdot.
Hear that? It’s almost as if thousands of spooks and hackers suddenly cried out at once… The Internet Engineering Task Force has just unanimously approved a security framework that will make encrypted connections on the web faster and more resistant to snooping.
It’s called Transport Layer Security version 1.3, and while it’s not a big flashy event, it very much is the kind of iterative improvement that keeps the web working in the face of malicious actors everywhere. The IETF is a body of engineers from all over the world who collaborate on standards like this — and their approval of TLS 1.3 was a long time coming: more than four years and 28 drafts.
That’s because the internet is a delicate machine and changes to its fundamental parts — such as how a client and server establish a secure, encrypted connection — must be made very, very carefully.
Without going too deep into the technical details (I’d be lost if I tried), TLS 1.3 makes a few prominent changes that should keep you safe.
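For a small, concrete taste of what "adopted by web services" looks like in practice, here is a minimal sketch (my own illustration, not from the article) using Python's standard ssl module. It assumes Python 3.7 or newer linked against OpenSSL 1.1.1 or newer, the first OpenSSL release to ship TLS 1.3:

```python
import ssl

# Build a client-side context with sane, secure defaults, then refuse
# to negotiate anything older than TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Certificate and hostname verification stay on by default; a socket
# wrapped with this context would use only TLS 1.3 cipher suites and
# its streamlined one-round-trip handshake.
```

A real connection would then go through `ctx.wrap_socket(...)` as usual; the point is that opting in to the new version is a one-line change once the underlying crypto library supports it.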
The whole standard is 155 pages long, and really only other engineers will want to dig in. But it’s available here if you’d like to peruse it or go into detail on one of the new features.
It doesn’t magically take effect, of course — but the IETF approval is a big step towards the standard being adopted by big companies, web services, and other, higher-level standards. You probably won’t even notice when it does come into play, but that’s how it’s supposed to happen. Just be sure to thank a network engineer or cryptographer next time you see one.
With all of us connected to our phones day and night, it's pretty easy to respond to work requests after official office hours are over. European countries like France have passed laws allowing employees to ignore employers after hours, giving citize...
Is it time to end your Facebook life? At the very least, it's time to check Facebook privacy settings/audit apps/turn off API sharing.
A new, longer-term study of video game play from the Max Planck Institute for Human Development and Germany's University Clinic Hamburg-Eppendorf recently published in Molecular Psychiatry found that adults showed "no significant changes" on a wide variety of behavioral measures after two straight months of daily violent game play. From a report: To correct for the "priming" effects inherent in these other studies, researchers had 90 adult participants play either Grand Theft Auto V or The Sims 3 for at least 30 minutes every day over eight weeks (a control group played no games during the testing period). The adults chosen, who ranged from 18 to 45 years old, reported little to no video game play in the previous six months and were screened for pre-existing psychological problems before the tests. The participants were subjected to a wide battery of 52 established questionnaires intended to measure "aggression, sexist attitudes, empathy, and interpersonal competencies, impulsivity-related constructs (such as sensation seeking, boredom proneness, risk taking, delay discounting), mental health (depressivity, anxiety) as well as executive control functions." The tests were administered immediately before and immediately after the two-month gameplay period and also two months afterward, in order to measure potential continuing effects. Over 208 separate comparisons (52 tests; violent vs. non-violent and control groups; pre- vs. post- and two-months-later tests), only three showed a statistically significant effect of the violent gameplay at a 95 percent confidence level.
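That last figure is worth a back-of-the-envelope check (mine, not the study's): at a 95 percent confidence level, chance alone produces roughly a 5 percent false-positive rate, so three hits out of 208 comparisons is actually fewer than random noise would predict.

```python
# With 208 independent comparisons at the 95% confidence level, pure
# chance is expected to yield about 5% spurious "significant" results.
comparisons = 208
alpha = 0.05  # 1 - 0.95 confidence level
expected_false_positives = comparisons * alpha
print(expected_false_positives)  # 10.4
```

In other words, around ten significant results would be unremarkable here; observing only three is consistent with no real effect at all.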
After the death — no, let’s not mince words — murder of Google Reader, I tried out a dozen or so other RSS readers to see if I could get a similar experience. Of all the ones I tested, I was very surprised to find that Digg Reader was the best of them all, for my purposes anyway.
It was simple, clean, compact, kept up to date, had no weird fluff, no “recommendations” or “trending articles” unless you accidentally visited Digg itself, and since I started using it, it has never had any downtime that I’ve noticed.
Over the past few years I’ve come to rely on it as much as I ever did on Google Reader, so I am sad to see that the service is shutting down in two weeks. You’ll still be able to export your feeds for a while afterwards, though.
Digg itself will live on, but the Reader portion is being retired. I understand why — RSS readers aren’t exactly glitzy or profitable, they’re more a public service than anything. At some point a company has to reckon with that and decide whether they want to continue subsidizing a tool used by relics like me instead of whatever most people use, probably Twitter or something.
Well, Digg Reader, you were a great tool and I’m sad to leave you. Guess it’s time for me to test out another dozen RSS readers, or maybe bite the bullet and host my own.
John Oliver’s main skill is that he’s usually pretty good at explaining complex and boring topics in short TV segments. And this week’s episode of Last Week Tonight is particularly relevant to the tech industry as Oliver tackled cryptocurrencies. In just 25 minutes, the Last Week Tonight team put together a decent introduction to bitcoin, blockchain, ICOs and cryptocurrencies.
I’ve always been fascinated by very, very old things. Fossils. Prehistoric artifacts. Cave paintings and petroglyphs. It’s like reaching out across the expanse of time and touching something that was alive long before what we call history—i.e., our written past.
One of my favorite Twitter feeds is The Ice Age, curated by Jamie Woodward. It’s a succession of images and links and bits of fact, always interesting, and sometimes weirdly apposite to my life in general and this series in particular.
Last September, Prof. Woodward posted an image that made me sit up sharply.
It’s made of mammoth ivory, and is around 35,000 years old. Someone in the feed referred to it as a “stallion,” but it’s not. The neck is too refined, and the shape of the belly is quite round. It is, perhaps, a mare, and perhaps a pregnant one.
And she looks just like this.
That’s a two-year-old filly, photographed in 2001. Many millennia after the ivory horse was carved. But the same arch of the neck. The same curve of the barrel. The same sense of power and presence. But living, and contemporary.
She’s still out there. Older now, of course. Gone as white as ivory, because she’s a grey, and grey horses turn white as they mature. But still all Mare.
More recently—just a couple of weeks ago—Prof. Woodward posted another striking image (credited to Heinrich Wendel). It’s much younger, between ten and twenty thousand years old, and it was drawn on a cave wall, by firelight, for reasons we don’t know and can only guess. It considerably predates the domestication of the horse—as far as we know—and yet the artist, whoever they were, had really looked at the horse. They had the proportions right. They showed the shaggy hairs around the jaw—maybe winter coat; maybe horses then were just that hairy, like some modern ponies. The ears are up, the nostrils a little flared, the eyes dark and deep. There’s a hint of human expression in the eyebrows and the smile—but horses can be very expressive, and their eyebrows do lift and their lips can turn up.
This artist paid attention. The horse looks out at us across the millennia, and it’s a real horse. It’s alive, as the artist remembered it; because it’s rather unlikely the horse could have been brought into the cave to be drawn from life. Horses do not like confined spaces at the best of times, and horses in that age had never been bred for submission to humans.
That happened much later. Maybe around 6500 BCE, maybe a millennium later. Herds for milk and meat came first; driving and riding, centuries after that, somewhere around 3500 BCE. With the wheel came the chariot, and horses and domesticated donkeys to pull it. And somewhere in there, some enterprising person managed to get a horse to accept being ridden, and then figured out steering and brakes and some form of padding and eventually a saddle and very eventually stirrups.
What also happened, with domestication, was breeding for specific traits. Now that we can learn so much from DNA, there are some genuine surprises popping out in the news. One that got a lot of traction last spring was a study of Scythian horses—a larger group of stallions from one grave dated around 300 BCE, two about 400 years older, and one mare from around 2100 BCE.
The study expected to find in the largest grave what they would find in a more modern excavation: that all the stallions were closely related. But in fact only two were. There was no inbreeding, and no sign of the kind of breeding that’s been done in recent centuries, focusing on a very few stallions and excluding the rest from the gene pool. “Keep the best, geld the rest.”
The Scythians went in another direction—from the evidence, allowing horses to breed as they would in the wild, with stallions driving off their sons and not breeding their mothers or sisters or daughters, but leaving those to secondary stallions. No inbreeding. No line-breeding. No emphasis on specific individuals.
And yet they appear to have bred for specific traits. Sturdy forelegs. Speed—the same gene that gives modern Thoroughbreds their advantage in a race. A gene for retaining water, which the study speculates has to do with breeding mares for milk production. And color: the horses were cream, spotted, black, bay, chestnut.
As a sometime breeder of horses, whose own breed is tiny (fewer than 5000 in the world), I salute these breeders. Our own genetics are surprisingly diverse for the small size of the gene pool, with eight available stallion lines and twenty-plus mare lines and the strong discouragement of inbreeding and line-breeding, but we’re still constrained by something that happened somewhere between ancient Scythia and the modern age, and that is the adage I quoted above, the belief in restricting male lines to a few quality individuals. Quality being determined by whatever the breeders wanted it to be, all too often as specific as color, head shape, foot size, or a particular type of musculature.
And that way lies trouble. Narrowing the gene pool increases the likelihood of genetic problems. If a single stallion is in vogue and everyone breeds to him because of what he offers—speed, color, muscles, whatever—then that cuts out numerous other genetic combinations. And if the stallion’s appeal stems from a particular set of genes, or even a specific mutation, the consequences can be devastating.
That happened to the American Quarter Horse a couple of decades ago. A stallion named Impressive was a huge show winner. The trait in which he excelled was extreme, body-builder musculature. Not until significant numbers of mares had been bred to him, and those offspring bred to each other, did it become apparent that those huge bulging muscles were the result of a mutation that caused the horse’s muscles to twitch constantly—a disease called Equine Hyperkalemic Periodic Paralysis, or HYPP, also called Impressive Syndrome, because every case traced to that one horse. The only way to be sure a horse does not succumb to the disease is to determine by genetic testing that the horse does not have a copy of the gene, and to exclude all horses with the gene from the gene pool.
Huge mess. Huge, huge mess, with millions of dollars invested in show winners who won because of their big muscles, but who might become incapacitated or die at any time. The fight to mandate testing, and then to bar HYPP-positive horses from being bred, was still going on the last I looked. Because of one stallion, and a breeding ethos that focused narrowly on a single exceptional individual.
Somehow the Scythians knew to avoid this, or else simply did not conceive of breeding related horses to each other. It’s not what horses do in their natural state. How that changed, and when that changed, is still being studied. I’ll be very interested to see the results when they’re made public.
Przewalski’s horse; photo by Ludovic Hirlimann
There’s more going on with this ongoing study of ancient horse lines, and more coming out, with more surprises still. One of the widely accepted beliefs of equine science has been that while nearly all current “wild” horses are in fact feral, descended from domesticated animals, one wild subspecies still remains: the Przewalski’s horse. Domestic horses, the theory goes, are descended from the Botai horses of central Asia—in or around what is now Kazakhstan.
But genetic analysis has demonstrated that this is almost entirely wrong. Modern horses share no more than 3% of their genetic material with the Botai horses—but the Przewalski’s horse is a descendant of these horses. Which means that there are no horses left from any wild population. All living horses are the descendants of domesticated horses, though we don’t know (yet) where the majority of them come from.
What’s even more startling is that the Botai horses carried the gene for leopard spotting, now seen in the American Appaloosa and the European Knabstrupper. Their feral descendants lost it, probably (as the article says) because it comes along with a gene for night blindness. It appears the Botai people selected for it.
Now we’re left to wonder where all our modern horses came from, and how and when the wild populations died out. As for why, I’m afraid we can guess: either incorporated into domestic herds or hunted into extinction—the latter being what seems to have happened in North America. Large, nomadic animals are all too likely to get in the way of human expansion, and an animal as useful as the horse would have to either assimilate or vanish.
What all this means for us now is that we’re starting to appreciate the value of diversity and the need for broader gene pools in our domestic animals. We’ve concentrated them too much, to the detriment of our animals’ health and functionality. Where breeders were encouraged to inbreed and line-breed, many are now being advised to outcross as much as possible. That’s not very much, unfortunately. But every little bit helps.
Top image: Lascaux cave paintings; photo by Patrick Janicek.
Judith Tarr is a lifelong horse person. She supports her habit by writing works of fantasy and science fiction as well as historical novels, many of which have been published as ebooks by Book View Cafe. She’s even written a primer for writers who want to write about horses: Writing Horses: The Fine Art of Getting It Right. Her most recent short novel, Dragons in the Earth, features a herd of magical horses, and her space opera, Forgotten Suns, features both terrestrial horses and an alien horselike species (and space whales!). She lives near Tucson, Arizona with a herd of Lipizzans, a clowder of cats, and a blue-eyed dog.