VPNFilter is a sophisticated, multi-stage malware package, part of the new breed of boot-persistent malware (software that can survive a reboot); it targets home routers and network-attached storage devices, then steals passwords and logins that traverse the network and exfiltrates them to its creators' servers.
Tuesday is the planned launch for a SpaceX Falcon 9 carrying two payloads to orbit — and this launch will be an especially interesting one. A set of five communications satellites for Iridium needs to get to almost 500 miles up, but a NASA mission has to pop out at the 300-mile mark. What to do? Just make a pit stop, it turns out.
Now, of course it’s not a literal stop — the thing will be going thousands of miles per hour. But from the reference frame of the rocket itself, it’s not too different from pulling over to let a friend out before hitting the gas again and rolling on to the next destination.
What will happen is this: The rocket’s first stage will take it up out of the atmosphere, then separate and hopefully land safely. The second stage will then ignite to take its payload up to orbit. Usually at this point it’ll burn until it reaches the altitude and attitude required, then deploy the payload. But in this case it has a bit more work to do.
When the rocket has reached 305 miles up, it will dip its nose 30 degrees down and roll a bit to put NASA’s twin GRACE-FO satellites in position. One has to point toward Earth, the other toward space. Once in position, the separation system will send the two birds out, one in each direction, at a speed of about a foot per second.
The one on the Earth side will be put into a slightly slower and lower orbit than the one on the space side, and after they’ve spread out to a distance of 137 miles, the lower satellite will boost itself upwards and synchronize with the other.
That will take a few days, but just 10 minutes after it sends the GRACE-FOs on their way, the Falcon 9 will resume its journey, reigniting the second stage engine and bringing the Iridium NEXT satellites to about 485 miles up. There the engine will cut off again and the rest of the payload will be delivered.
So what are these high-maintenance satellites that have to have their own special deployments?
The Iridium NEXT satellites are the latest in a series of deployments commissioned by the space-based communications company; they're five of a planned 75 that will replace its old constellation and provide worldwide coverage. The last launch, in late March, went off without a hitch. This is the only launch with just five birds to deploy; the previous and pending launches each carry 10 satellites.
GRACE-FO is a “follow-on” mission (hence the FO) to GRACE, the Gravity Recovery and Climate Experiment, and a collaboration with the German Research Centre for Geosciences. GRACE launched in 2002, and for 15 years it monitored the presence and changes in the fresh water on (and below) the Earth’s surface. This has been hugely beneficial for climate scientists and others, and the follow-on will continue where the original left off.
The original mission worked by detecting tiny changes in the difference between the two satellites as they passed over various features — these tiny changes indicate how mass is distributed below them and can be used to measure the presence of water. GRACE-FO adds a laser ranging system that may improve the precision of this process by an order of magnitude.
Interestingly, the actual rocket that will be doing this complicated maneuver is the same one that launched the ill-fated Zuma satellite in January. That payload apparently failed to deploy itself properly after separating from the second stage, though because it was a classified mission no one has publicly stated exactly what went wrong — except to confirm that SpaceX wasn’t to blame.
The launch will take place at Vandenberg Air Force Base at 12:47 tomorrow afternoon Pacific time. If it’s aborted, there’s another chance on Wednesday. Keep an eye out for the link to the live stream of this unique launch!
Former Commissioner Mignon Clyburn, who left the agency this month, has taken aim at the FCC in an interview, saying the agency has abandoned its mission to safeguard consumers and protect their privacy and speech.

From her interview with Ars Technica: "I'm an old Trekkie," Clyburn told Ars in a phone interview, while comparing the FCC's responsibility to the Star Trek fictional universe's Prime Directive. "I go back to my core, my prime directive of putting consumers first." If the FCC doesn't do all it can to bring affordable communications services to everyone in the US, "our mission will not be realized," she said.

The FCC's top priority, as set out by the Communications Act, is to make sure all Americans have "affordable, efficient, and effective" access to communications services, Clyburn said. But too often, the FCC's Republican majority led by Chairman Ajit Pai is prioritizing the desires of corporations over consumers, Clyburn said. "I don't believe it's accidental that we are called regulators," she said. "Some people at the federal level try to shy away from that title. I embrace it."

Clyburn said that deregulation isn't bad in markets with robust competition, because competition itself can protect consumers. But "that is just not the case" in broadband, she said. "Let's just face it, [Internet service providers] are last-mile monopolies," she told Ars. "In an ideal world, we wouldn't need regulation. We don't live in an ideal world, all markets are not competitive, and when that is the case, that is why agencies like the FCC were constructed. We are here as a substitute for competition."

Broadband regulators should strike a balance that protects consumers and promotes investment from large and small companies, she said. "If you don't regulate appropriately, things go too far one way or the other, and we either have prices that are too high or an insufficient amount of resources or applications or services to meet the needs of Americans," Clyburn said.
Read more of this story at Slashdot.
Christopher Ingraham, writing for The Washington Post: China, Russia and other authoritarian countries inflate their official GDP figures by anywhere from 15 to 30 percent in a given year, according to a new analysis of a quarter-century of satellite data. The working paper, by Luis R. Martinez of the University of Chicago, also found that authoritarian regimes are especially likely to artificially boost their gross domestic product numbers in the years before elections, and that the differences in GDP reporting between authoritarian and non-authoritarian countries can't be explained by structural factors, such as urbanization, composition of the economy or access to electricity. Martinez's findings are derived from a novel data source: satellite imagery that tracks changes in the level of nighttime lighting within and between countries over time.
This is why the internet was invented.
Watch this squirrel come in like a wrecking ball as it tries to launch itself onto a backyard bird feeder.
Mastering the various buttons, thumbsticks, triggers and touchpads of video game controllers is hard enough, but it can be near impossible for people with limited mobility and forms of disability. To open up gaming to more people who might normally miss out, Microsoft has unveiled the Xbox Adaptive Controller, a versatile new device that can be connected to a range of different accessories to cater to different players' specific needs...
TIL Nachos are named after their inventor, Ignacio "Nacho" Anaya. The dish was originally called "Nacho's especiales," and eventually the apostrophe disappeared and it was shortened to just "nachos."
Russian cybersecurity software maker Kaspersky Lab has announced it will be moving core infrastructure processes to Zurich, Switzerland, as part of a shift announced last year to try to win back customer trust.
It also said it’s arranging for the process to be independently supervised by a Switzerland-based third party qualified to conduct technical software reviews.
“By the end of 2019, Kaspersky Lab will have established a data center in Zurich and in this facility will store and process all information for users in Europe, North America, Singapore, Australia, Japan and South Korea, with more countries to follow,” it writes in a press release.
“Kaspersky Lab will relocate to Zurich its ‘software build conveyer’ — a set of programming tools used to assemble ready to use software out of source code. Before the end of 2018, Kaspersky Lab products and threat detection rule databases (AV databases) will start to be assembled and signed with a digital signature in Switzerland, before being distributed to the endpoints of customers worldwide.
“The relocation will ensure that all newly assembled software can be verified by an independent organization, and show that software builds and updates received by customers match the source code provided for audit.”
In October the company unveiled what it dubbed a “comprehensive transparency initiative” as it battled suspicion that its antivirus software had been hacked or penetrated by the Russian government and used as a route for scooping up US intelligence.
Being a trusted global cybersecurity firm and operating core processes out of Russia where authorities might be able to lean on your company for access has essentially become untenable as geopolitical concern over the Kremlin’s online activities has spiked in recent years.
Yesterday the Dutch government became the latest public sector customer to announce a move away from Kaspersky products (via Reuters) — saying it was doing so as a “precautionary measure”, and advising companies operating vital services to do the same.
Responding to the Dutch government’s decision, Kaspersky described it as “very disappointing”, saying its transparency initiative is “designed precisely to address any fears that people or organisations may have”.
“We are implementing these measures first and foremost in response to the evolving, ultra-connected global landscape and the challenges the cyber-world is currently facing,” the company adds in a detailed Q&A about the measures. “This is not exclusive to Kaspersky Lab, and we believe other organizations will in future also choose to adapt to these trends. Having said that, the overall aim of these measures is transparency, verified and proven, which means that anyone with concerns will now be able to see the integrity and trustworthiness of our solutions.”
The core processes that Kaspersky will move from Russia to Switzerland over this year and next include customer data storage and processing (for “most regions”), and software assembly, including threat detection updates.
As a result of the shift it says it will be setting up “hundreds” of servers in Switzerland and establishing a new data center there, as well as drawing on facilities of a number of local data center providers.
Kaspersky is not exiting Russia entirely, though, and products for the Russian market will continue to be developed and distributed out of Moscow.
“In Switzerland we will be creating the ‘worldwide’ (ww) version of our products and AV bases. All modules for the ww-version will be compiled there. We will continue to use the current software build conveyer in Moscow for creating products and AV bases for the Russian market,” it writes, claiming it is retaining a software build conveyor in Russia to “simplify local certification”.
Data of customers from Latin America and Asia (with the exception of Japan, South Korea and Singapore) will also continue to be stored and processed in Russia — but Kaspersky says the list of countries for which data will be processed and stored in Switzerland will be “further extended”, adding: “The current list is an initial one… and we are also considering the relocation of further data processing to other planned Transparency Centers, when these are opened.”
Whether retaining a presence and infrastructure in Russia will work against Kaspersky’s wider efforts to win back trust globally remains to be seen.
In the Q&A it claims: “There will be no difference between Switzerland and Russia in terms of data processing. In both regions we will adhere to our fundamental principle of respecting and protecting people’s privacy, and we will use a uniform approach to processing users’ data, with strict policies applied.”
However, other pre-emptive responses in the document underline the trust challenge it is likely to face, such as a question asking what kind of data stored in Switzerland will be sent to, or accessible by, staff at its Moscow HQ.
On this it writes: “All data processed by Kaspersky Lab products located in regions excluding Russia, CIS, Latin America, Asian and African countries, will be stored in Switzerland. By default only aggregated statistics data will be sent to R&D in Moscow. However, Kaspersky Lab experts from HQ and other locations around the world will be able to access data stored in the Transparency Center. Each information request will be logged and monitored by the independent Swiss-based organization.”
Clearly the robustness of the third party oversight provisions will be essential to its Global Transparency Initiative winning trust.
Kaspersky’s activity in Switzerland will be overseen by an (as yet unnamed) independent third party which the company says will have “all access necessary to verify the trustworthiness of our products and business processes”, including: “Supervising and logging instances of Kaspersky Lab employees accessing product meta data received through KSN [Kaspersky Security Network] and stored in the Swiss data center; and organizing and conducting a source code review, plus other tasks aimed at assessing and verifying the trustworthiness of its products.”
Switzerland will also host one of the dedicated Transparency Centers the company said last year that it would be opening as part of the wider program aimed at securing customer trust.
It expects the Swiss center to open this year, although the shifting of core infrastructure processes won’t be completed until Q4 2019. (It attributes the long timeline to the complexity of redesigning infrastructure that has been operating for some 20 years, and estimates the cost of the project at $12M.)
Within the Transparency Center, which Kaspersky will operate itself, the source code of its products and software updates will be available for review by “responsible stakeholders” — from the public and private sector.
It adds that the details of review processes — including how governments will be able to review code — are “currently under discussion” and will be made public “as soon as they are available”.
And providing government review in a way that does not risk further undermining customer trust may also prove a tricky balancing act for Kaspersky, given multi-directional geopolitical sensibilities. The devil will be in the policy detail vis-à-vis “trusted” partners, and in whether the processes it deploys can reassure all of its customers all of the time.
“Trusted partners will have access to the company’s code, software updates and threat detection rules, among other things,” it writes, saying the Center will provide these third parties with: “Access to secure software development documentation; Access to the source code of any publicly released product; Access to threat detection rule databases; Access to the source code of cloud services responsible for receiving and storing the data of customers based in Europe, North America, Australia, Japan, South Korea and Singapore; Access to software tools used for the creation of a product (the build scripts), threat detection rule databases and cloud services”; along with “technical consultations on code and technologies”.
It is still intending to open two additional centers, one in North America and one in Asia, but precise locations have not yet been announced.
On supervision and review Kaspersky also says that it’s hoping to work with partners to establish an independent, non-profit organization for the purpose of producing professional technical reviews of the trustworthiness of the security products of multiple members — including but not limited to Kaspersky Lab itself.
Which would certainly go further to bolster trust, though it has nothing firm to share about this plan as yet.
“Since transparency and trust are becoming universal requirements across the cybersecurity industry, Kaspersky Lab supports the creation of a new, non-profit organization to take on this responsibility, not just for the company, but for other partners and members who wish to join,” it writes on this.
Next month it’s also hosting an online summit to discuss “the growing need for transparency, collaboration and trust” within the cybersecurity industry.
Commenting in a statement, CEO Eugene Kaspersky, added: “In a rapidly changing industry such as ours we have to adapt to the evolving needs of our clients, stakeholders and partners. Transparency is one such need, and that is why we’ve decided to redesign our infrastructure and move our data processing facilities to Switzerland. We believe such action will become a global trend for cybersecurity, and that a policy of trust will catch on across the industry as a key basic requirement.”
Astronomers discover a strange pair of rogue planets wandering the Milky Way together. The free-range planets, which are each about 4 times the mass of Jupiter, orbit around each other rather than a star.
Over the past few years, the Hubble Space Telescope has observed what looked to be plumes of water vapor shooting from the surface of one of Jupiter's moons, Europa. Now, scientists have looked over decades-old data from Galileo and discovered that t...
Back in early 2013, the podcasting community was freaking out. A patent troll called Personal Audio LLC had sued comedian Adam Carolla and was threatening a bunch of smaller podcasters. Personal Audio claimed that the podcasters infringed U.S. Patent 8,112,504, which claims a “system for disseminating media content” in serialized episodes. EFF challenged the podcasting patent at the Patent Office in October 2013. We won that proceeding, and it was affirmed on appeal. Today, the Supreme Court rejected Personal Audio’s petition for review. The case is finally over.
We won this victory with the support of our community. More than one thousand people donated to EFF’s Save Podcasting campaign. We also asked the public to help us find prior art. We filed an inter partes review (IPR) petition that showed Personal Audio did not invent anything new, and that other people were podcasting years before Personal Audio first applied for a patent.
Meanwhile, Adam Carolla fought Personal Audio in federal court in the Eastern District of Texas. He also raised money for his defense and was eventually able to convince Personal Audio to walk away. When the settlement was announced, Personal Audio suggested that it would no longer sue small podcasters. That gave podcasters some comfort. But the settlement did not invalidate the patent.
In April 2015, EFF won at the Patent Office. The Patent Trial and Appeal Board (PTAB) invalidated all the challenged claims of the podcasting patent, finding that it should not have been issued in light of two earlier publications, one relating to CNN news clips and one relating to CBC online radio broadcasting. Personal Audio appealed that decision to the Federal Circuit.
The podcasting patent expired in October 2016, while the case was on appeal before the Federal Circuit. But that wouldn’t save podcasters who were active before the patent expired. The statute of limitations in patent cases is six years. If it could salvage its patent claims, Personal Audio could still sue for damages for years of podcasting done before the patent expired.
On August 7, 2017, the Federal Circuit affirmed the PTAB’s ruling invalidating all challenged claims. After this defeat, Personal Audio tried to get the Supreme Court to take its case. It argued that the IPR process is unconstitutional, raising arguments identical to those presented in the Oil States case. The Supreme Court rejected those arguments in its Oil States decision, issued last month. Personal Audio also argued that EFF should be bound by a jury verdict in a case between Personal Audio and CBS—an argument which made no sense, because that case involved different prior art and EFF was not a party.
Today, the Supreme Court issued an order denying Personal Audio’s petition for certiorari. With that ruling, the PTAB’s decision is now final and the patent claims Personal Audio asserted against podcasters are no longer valid. We thank everyone who supported EFF’s Save Podcasting campaign.
Original release date: May 14, 2018
The CERT Coordination Center (CERT/CC) has released information on email client vulnerabilities that can reveal plaintext versions of OpenPGP- and S/MIME-encrypted emails. A remote attacker could exploit these vulnerabilities to obtain sensitive information.
NCCIC encourages users and administrators to review CERT/CC’s Vulnerability Note VU #122919, apply the necessary mitigations, and refer to software vendors for appropriate patches, when available.
An anonymous reader quotes a report from Wired: The ubiquitous email encryption schemes PGP and S/MIME are vulnerable to attack, according to a group of German and Belgian researchers who posted their findings on Monday. The weakness could allow a hacker to expose plaintext versions of encrypted messages -- a nightmare scenario for users who rely on encrypted email to protect their privacy, security, and safety. The weakness, dubbed eFail, emerges when an attacker who has already managed to intercept your encrypted emails manipulates how the message will process its HTML elements, like images and multimedia styling. When the recipient gets the altered message and their email client -- like Outlook or Apple Mail -- decrypts it, the email program will also load the external multimedia components through the maliciously altered channel, allowing the attacker to grab the plaintext of the message. The eFail attack requires hackers to have a high level of access in the first place that, in itself, is difficult to achieve. They need to already be able to intercept encrypted messages, before they begin waylaying messages to alter them. PGP is a classic end-to-end encryption scheme that has been a go-to for secure consumer email since the late 1990s because of the free, open-source standard known as OpenPGP. But the whole point of doing the extra work to keep data encrypted from the time it leaves the sender to the time it displays for the receiver is to reduce the risk of access attacks -- even if someone can tap into your encrypted messages, the data will still be unreadable. eFail is an example of these secondary protections failing.
ECS is Amazon’s Elastic Container Service. That’s Greek for how you get Docker containers running in the cloud. It’s sort of like Kubernetes without all the bells and whistles. It takes a bit of getting used to, but this Terraform how-to should get you moving. You need an EC2 host to run …
During Google I/O today, the company announced that Gboard would soon support Morse code, a move inspired by developer Tania Finlayson who communicates through head movements that are translated into Morse code and then into speech. She and her husba...
Researchers at Skidmore College conducted an eye-tracking experiment with 60 Skidmore students and found that two spaces at the end of a period slightly improved the processing of text during reading. Ars Technica reports the findings: Previous cognitive science research has been divided on the issue. Some research has suggested closer spacing of the beginning of a new sentence may allow a reader to capture more characters in their parafoveal vision -- the area of the retina just outside the area of focus, or fovea -- and thus start processing the information sooner (though experimental evidence of that was not very strong). Other prior research has inferred that an extra space prevents lateral interference in processing text, making it easier for the reader to identify the word in focus. But no prior research found by [study authors] Johnson, Bui, and Schmitt actually measured reader performance with each typographic scheme. First, they divided their group of 60 research subjects by way of a keyboard task -- the subjects typed text dictated to them into a computer and were sorted into "one-spacers" (39 regularly put a single space between sentences) and "two-spacers" (21 hit that space bar twice consistently after a period). Every student subject used but a single space after each comma. Having identified subjects' proclivities, the researchers then gave them 21 paragraphs to read (including one practice paragraph) on a computer screen and tracked their eye movement as they read using an Eyelink 1000 video-based eye tracking system. [...] The "one-spacers" were, as a group, slower readers across the board (by about 10 words per minute), and they showed statistically insignificant variation across all four spacing practices. And "two-spacers" saw a three-percent increase in reading speed for paragraphs in their own favored spacing scheme. The controversial part of the study has to do with the 14 point Courier New font that the researchers presented to the students. 
"Courier New is a fixed-width font that resembles typewritten text -- used by hardly anyone for documents," reports Ars. "Even the APA suggests using 12 point Times Roman, a proportional-width font. Fixed-width fonts make a double-space more pronounced."
One effect of the Snowden leaks is that the NSA now makes an annual disclosure of the extent of its domestic surveillance operations; that's how we know that the NSA collected 534 million phone call and text message records (time, date, location, from, to, but not the content), more than tripling its surveillance takings from 2016.
For the first time in two decades, a huge number of books, films, and other works will escape U.S. copyright law. From a report: The Great American Novel enters the public domain on January 1, 2019 -- quite literally. Not the concept, but the book by William Carlos Williams. It will be joined by hundreds of thousands of other books, musical scores, and films first published in the United States during 1923. It's the first time since 1998 for a mass shift to the public domain of material protected under copyright. It's also the beginning of a new annual tradition: For several decades from 2019 onward, each New Year's Day will unleash a full year's worth of works published 95 years earlier. This coming January, Charlie Chaplin's film The Pilgrim and Cecil B. DeMille's The 10 Commandments will slip the shackles of ownership, allowing any individual or company to release them freely, mash them up with other work, or sell them with no restriction. This will be true also for some compositions by Bela Bartok, Aldous Huxley's Antic Hay, Winston Churchill's The World Crisis, Carl Sandburg's Rootabaga Pigeons, E.E. Cummings's Tulips and Chimneys, Noel Coward's London Calling! musical, Edith Wharton's A Son at the Front, many stories by P.G. Wodehouse, and hosts upon hosts of forgotten works, according to research by the Duke University School of Law's Center for the Study of the Public Domain. Throughout the 20th century, changes in copyright law led to longer periods of protection for works that had been created decades earlier, which altered a pattern of relatively brief copyright protection that dates back to the founding of the nation. This came from two separate impetuses. First, the United States had long stood alone in defining copyright as a fixed period of time instead of using an author's life plus a certain number of years following it, which most of the world had agreed to in 1886. 
Second, the ever-increasing value of intellectual property could be exploited with a longer term. But extending American copyright law and bringing it into international harmony meant applying "patches" retroactively to work already created and published. And that led, in turn, to lengthy delays in copyright expiring on works that now date back almost a century.
Long-time Slashdot reader Martin S. pointed us to this excerpt from the new book Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley by Portland-based investigative reporter Corey Pein. The author shares what he realized at a job recruitment fair seeking Java Legends, Python Badasses, Hadoop Heroes, "and other gratingly childish classifications describing various programming specialities." I wasn't the only one bluffing my way through the tech scene. Everyone was doing it, even the much-sought-after engineering talent. I was struck by how many developers were, like myself, not really programmers, but rather this, that and the other. A great number of tech ninjas were not exactly black belts when it came to the actual onerous work of computer programming. So many of the complex, discrete tasks involved in the creation of a website or an app had been automated that it was no longer necessary to possess knowledge of software mechanics. The coder's work was rarely a craft. The apps ran on an assembly line, built with "open-source", off-the-shelf components. The most important computer commands for the ninja to master were copy and paste... [M]any programmers who had "made it" in Silicon Valley were scrambling to promote themselves from coder to "founder". There wasn't necessarily more money to be had running a startup, and the increase in status was marginal unless one's startup attracted major investment and the right kind of press coverage. It's because the programmers knew that their own ladder to prosperity was on fire and disintegrating fast. They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation of learn-to-code courses around the world lowered the market value of their skills, and as advances in artificial intelligence allowed for computers to take over more of the mundane work of producing software.
The programmers also knew that the fastest way to win that promotion to founder was to find some new domain that hadn't yet been automated. Every tech industry campaign designed to spur investment in the Next Big Thing -- at that time, it was the "sharing economy" -- concealed a larger programme for the transformation of society, always in a direction that favoured the investor and executive classes. "I wasn't just changing careers and jumping on the 'learn to code' bandwagon," he writes at one point. "I was being steadily indoctrinated in a specious ideology."
Like the Spanish Inquisition, nobody expects cascading failures. Here's how Google handles them.
This excerpt, Chapter 22 ("Addressing Cascading Failures"), is a particularly interesting and comprehensive chapter from Google's awesome book on Site Reliability Engineering. Worth reading if it hasn't been on your radar. And it's free!
Written by Mike Ulrich
"If at first you don't succeed, back off exponentially."
Dan Sandler, Google Software Engineer
"Why do people always forget that you need to add a little jitter?"
Ade Oshineye, Google Developer Advocate
A cascading failure is a failure that grows over time as a result of positive feedback. It can occur when a portion of an overall system fails, increasing the probability that other portions of the system fail. For example, a single replica for a service can fail due to overload, increasing load on remaining replicas and increasing their probability of failing, causing a domino effect that takes down all the replicas for a service.
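The two epigraphs above point at the standard client-side defense against this feedback loop: back off exponentially between retries, and add jitter so clients don't retry in synchronized waves. Here's a minimal sketch in Python (the function name and parameters are ours, not from the chapter):

```python
import random
import time


def call_with_backoff(op, max_retries=5, base=0.5, cap=30.0):
    """Retry `op` with capped exponential backoff plus full jitter.

    Retrying immediately after a failure is exactly the positive
    feedback that fuels a cascading failure: every client hammers the
    struggling replicas at once. Exponential backoff spreads retries
    out over time, and the random jitter keeps clients from retrying
    in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return op()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the failure
            # Exponential backoff window: base, 2*base, 4*base, ... capped.
            window = min(cap, base * (2 ** attempt))
            # "Full jitter": sleep a uniformly random slice of the window.
            time.sleep(random.uniform(0, window))
```

Note that backoff alone only slows the feedback loop; the chapter's broader point is that servers must also shed load so that the retries that do arrive can succeed.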
Over the past 28 years, the Hubble Space Telescope has inspired a generation of astronomers with insanely dramatic views of the universe, but it's hardly done blowing our minds. NASA has unveiled a new fly-through video of the Lagoon Nebula. Located...
Thank you to Neil Gaiman for Norse Mythology. I was just held hostage by two young girls (8/10) until I finished the book. Watching my youngest reenact the battles of Ragnarok was magical. [Published articles]
I really like Neil Gaiman as an author and look forward to all of his new books. I ended up getting Norse Mythology because I thought it might be an interesting read, and my girls saw it and wanted in. We ended up reading a chapter/story every night and they couldn't get enough of it. Last night we came to the ending story of Ragnarok and they were totally entranced. I try to do voices and act out things when I read to them, and my youngest took it a step further by acting out the battle in her room while I was reading. Experiences like this make being a parent special.
I found that the writing was really easy to get into. It felt like I was an old storyteller relating long lost legends to a new generation (which I guess I was). I really hope that he decides to tackle other mythologies because I would love to be able to share more old stories with my kids.
As of last week, Superman celebrated his 80th year as the world’s most recognizable superhero. Tons of conversations have been happening about favorite stories and moments in the Kal-El canon, and it’s worth thinking about the ones that came out since the dawn of the new millennium.
MEDantex, a Kansas-based company that provides medical transcription services for hospitals, clinics and private physicians, took down its customer Web portal last week after being notified by KrebsOnSecurity that it was leaking sensitive patient medical records — apparently for thousands of physicians.
On Friday, KrebsOnSecurity learned that the portion of MEDantex’s site which was supposed to be a password-protected portal physicians could use to upload audio-recorded notes about their patients was instead completely open to the Internet.
What’s more, numerous online tools intended for use by MEDantex employees were exposed to anyone with a Web browser, including pages that allowed visitors to add or delete users, and to search for patient records by physician or patient name. No authentication was required to access any of these pages.
This exposed administrative page from MEDantex’s site granted anyone complete access to physician files, as well as the ability to add and delete authorized users.
Several MEDantex portal pages left exposed to the Web suggest that the company recently was the victim of WhiteRose, a strain of ransomware that encrypts a victim’s files unless and until a ransom demand is paid — usually in the form of some virtual currency such as bitcoin.
Contacted by KrebsOnSecurity, MEDantex founder and chief executive Sreeram Pydah confirmed that the Wichita, Kansas-based transcription firm recently rebuilt its online servers after suffering a ransomware infestation. Pydah said the MEDantex portal was taken down for nearly two weeks, and that it appears the glitch exposing patient records to the Web was somehow incorporated into that rebuild.
“There was some ransomware injection [into the site], and we rebuilt it,” Pydah said, just minutes before disabling the portal (which remains down as of this publication). “I don’t know how they left the documents in the open like that. We’re going to take the site down and try to figure out how this happened.”
It’s unclear exactly how many patient records were left exposed on MEDantex’s site. But one of the main exposed directories was named “/documents/userdoc,” and it included more than 2,300 physicians listed alphabetically by first initial and last name. Drilling down into each of these directories revealed a varying number of patient records — displayed and downloadable as Microsoft Word documents and/or raw audio files.
Although many of the exposed documents appear to be quite recent, some of the records dated as far back as 2007. It’s also unclear how long the data was accessible, but this Google cache of the MEDantex physician portal seems to indicate it was wide open on April 10, 2018.
Among the clients listed on MEDantex’s site are New York University Medical Center; San Francisco Multi-Specialty Medical Group; Jackson Hospital in Montgomery, Ala.; Allen County Hospital in Iola, Kan.; Green Clinic Surgical Hospital in Ruston, La.; Trillium Specialty Hospital in Mesa and Sun City, Ariz.; Cooper University Hospital in Camden, N.J.; Sunrise Medical Group in Miami; the Wichita Clinic in Wichita, Kan.; the Kansas Spine Center; the Kansas Orthopedic Center; and Foundation Surgical Hospitals nationwide. MEDantex’s site states these are just some of the healthcare organizations partnering with the company for transcription services.
Unfortunately, the incident at MEDantex is far from an anomaly. A study of data breaches released this month by Verizon Enterprise found that nearly a quarter of all breaches documented by the company in 2017 involved healthcare organizations.
Verizon says ransomware attacks accounted for 85 percent of all malware in healthcare breaches last year, and that healthcare is the only industry in which the threat from the inside is greater than that from outside.
“Human error is a major contributor to those stats,” the report concluded.
Source: Verizon Business 2018 Data Breach Investigations Report.
According to a story at BleepingComputer, a security news and help forum that specializes in covering ransomware outbreaks, WhiteRose was first spotted about a month ago. BleepingComputer founder Lawrence Abrams says it’s not clear how this ransomware is being distributed, but that reports indicate it is being manually installed by hacking into Remote Desktop services.
Fortunately for WhiteRose victims, this particular strain of ransomware is decryptable without the need to pay the ransom.
“The good news is this ransomware appears to be decryptable by Michael Gillespie,” Abrams wrote. “So if you become infected with WhiteRose, do not pay the ransom, and instead post a request for help in our WhiteRose Support & Help topic.”
Ransomware victims may also be able to find assistance in unlocking data without paying from nomoreransom.org.
KrebsOnSecurity would like to thank India-based cybersecurity startup Banbreach for the heads up about this incident.
An anonymous reader quotes a report from Ars Technica: A newly published "exploit chain" for Nvidia Tegra X1-based systems seems to describe an apparently unpatchable method for running arbitrary code on all currently available Nintendo Switch consoles. Hardware hacker Katherine Temkin and the hacking team at ReSwitched released an extensive outline of what they're calling the Fusee Gelee coldboot vulnerability earlier today, alongside a proof-of-concept payload that can be used on the Switch. "Fusee Gelee isn't a perfect, 'holy grail' exploit -- though in some cases it can be pretty damned close," Temkin writes in an accompanying FAQ. The exploit, as outlined, makes use of a vulnerability inherent in the Tegra X1's USB recovery mode, circumventing the lock-out operations that would usually protect the chip's crucial bootROM. By sending a bad "length" argument to an improperly coded USB control procedure at the right point, the user can force the system to "request up to 65,535 bytes per control request." That data easily overflows a crucial direct memory access (DMA) buffer in the bootROM, in turn allowing data to be copied into the protected application stack and giving the attacker the ability to run arbitrary code. The exploit can't be fixed via a downloadable patch because the flawed bootROM can't be modified once the Tegra chip leaves the factory. As Temkin writes, "unfortunately, access to the fuses needed to configure the device's ipatches was blocked when the ODM_PRODUCTION fuse was burned, so no bootROM update is possible. It is suggested that consumers be made aware of the situation so they can move to other devices, where possible." Ars notes that Nintendo may however be able to detect "hacked" systems when they sign on to Nintendo's servers. "The company could then ban those systems from using the Switch's online functions."
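The flaw described above is a classic unchecked-length copy: the size of the copy is taken from the attacker-controlled request rather than from the size of the receive buffer. As a language-neutral illustration of that bug class (a Python simulation using a flat bytearray as "memory"; the buffer sizes and function names here are invented for the sketch and are not the Tegra bootROM's actual layout or code):

```python
BUFFER_SIZE = 0x1000  # hypothetical receive-buffer size, not Tegra's real value
STACK_SIZE = 0x100    # hypothetical adjacent "protected" region

def make_memory():
    """Flat memory: a fixed receive buffer followed by an adjacent region
    (standing in for the protected application stack)."""
    return bytearray(BUFFER_SIZE + STACK_SIZE)

def copy_request_unchecked(memory, payload, requested_length):
    """The bug class: the copy length comes from the request itself.

    A caller claiming a large length (the article cites up to 65,535
    bytes per control request) makes the copy spill past the buffer
    into the adjacent region."""
    n = min(requested_length, len(payload))
    memory[0:n] = payload[:n]  # no bounds check against BUFFER_SIZE

def copy_request_checked(memory, payload, requested_length):
    """The missing validation: reject lengths beyond the buffer."""
    if requested_length > BUFFER_SIZE:
        raise ValueError("length exceeds receive buffer")
    n = min(requested_length, len(payload))
    memory[0:n] = payload[:n]
```

In the unchecked version, bytes past index `BUFFER_SIZE` land in the adjacent region, which is the simulated analogue of attacker data reaching the protected stack; the checked version refuses the request outright.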
Some people, when they look up at the sky and see a cloud, think “dog” or “fluffy.” And some people think “it’s a waning cumulus with a feathered edge suggesting a pressure system from the north ending in an updraft, which would probably cause turbulence. Also looks a bit like a dog.” Clearly one of those people created these complex, beautiful renderings of weather data.
The idea behind this project at ETH Zürich, led by Markus Gross, is that different visualizations of detailed weather data may be highly useful in different fields. He and his colleagues have been working on a huge set of such data and finding ways of accurately representing it with an eye to empowering meteorologists from the TV station to the research lab.
“The scientific value of our visualisation lies in the fact that we make something visible that was impossible to see with the existing tools,” explained undergraduate researcher Noël Rimensberger in an ETHZ news release. Representing weather “in a relatively simple, comprehensible way” is its own reward, really.
The data in question are all from the evening of April 26, 2013, the date chosen for a large-scale meteorology project in which multiple institutions collaborated. The team created different ways to visualize different bodies of data.
For instance, if you were looking down on a whole county, what’s the use of seeing every little ripple of a cloud system? What you need is larger trends and ways of picking out important data points, such as areas likely to develop precipitation, or where the beginnings of movement suggest a cold front moving in.
On the other hand, such macro data has no place when you’re looking at the formation of clouds over a single locality, or why a storm seems to have struck with unnatural fierceness there.
And again, what if you’re a small aircraft pilot? A little rain and clouds you might not mind, but what if you want to see patterns of turbulence in the country and how they move as the day wears on? Or if you’re investigating what led to a crash at a particular location and time?
These visualizations show how a large set of data can be interpreted and displayed in many ways and to many purposes.
Tobias Günther, Rimensberger’s supervisor on the project, pointed out that the algorithms they used to interpret the reams of data and create these simulations are far too slow at present, but they’re working on improving them. Still, some could be used if time isn’t of the essence.
You can find a link to download the full paper, created for an ETH Zürich visualization contest, at the university’s website.
TIL how the UK military recruiter mistook "cryptogamist" (algae expert) for "cryptogramist" and sent Geoffrey Tandy to join the code breakers; he wasn't so useful until captured German papers arrived water-logged; with his expertise they salvaged them, cracked the code, and hastened the victory. [Published articles]
NASA has released incredible new images of the Lagoon Nebula taken by the Hubble space telescope, in honor of its 28th anniversary and presumably 4/20. Dude... have you ever like... thought about how small we are... and how big the universe is...?
"Those who designed our digital world are aghast at what they created," argues a new article in New York Magazine titled "The Internet Apologizes". Today, the most dire warnings are coming from the heart of Silicon Valley itself. The man who oversaw the creation of the original iPhone believes the device he helped build is too addictive. The inventor of the World Wide Web fears his creation is being "weaponized." Even Sean Parker, Facebook's first president, has blasted social media as a dangerous form of psychological manipulation. "God only knows what it's doing to our children's brains," he lamented recently... The internet's original sin, as these programmers and investors and CEOs make clear, was its business model. To keep the internet free -- while becoming richer, faster, than anyone in history -- the technological elite needed something to attract billions of users to the ads they were selling. And that something, it turns out, was outrage. As Jaron Lanier, a pioneer in virtual reality, points out, anger is the emotion most effective at driving "engagement" -- which also makes it, in a market for attention, the most profitable one. By creating a self-perpetuating loop of shock and recrimination, social media further polarized what had already seemed, during the Obama years, an impossibly and irredeemably polarized country... What we're left with are increasingly divided populations of resentful users, now joined in their collective outrage by Silicon Valley visionaries no longer in control of the platforms they built. Lanier adds that "despite all the warnings, we just walked right into it and created mass behavior-modification regimes out of our digital networks." Sean Parker, the first president of Facebook, is even quoted as saying that a social-validation feedback loop is "exactly the kind of thing that a hacker like myself would come up with, because you're exploiting a vulnerability in human psychology. 
The inventors, creators -- it's me, it's Mark [Zuckerberg], it's Kevin Systrom on Instagram, it's all of these people -- understood this consciously. And we did it anyway." The article includes quotes from Richard Stallman, arguing that data privacy isn't the problem. "The problem is that these companies are collecting data about you, period. We shouldn't let them do that. The data that is collected will be abused..." He later adds that "We need a law that requires every system to be designed in a way that achieves its basic goal with the least possible collection of data... No company is so important that its existence justifies setting up a police state." The article proposes hypothetical solutions. "Could a subscription model reorient the internet's incentives, valuing user experience over ad-driven outrage? Could smart regulations provide greater data security? Or should we break up these new monopolies entirely in the hope that fostering more competition would give consumers more options?" Some argue that the Communications Decency Act of 1996 shields internet companies from all consequences for bad actors -- de-incentivizing the need to address them -- and Marc Benioff, CEO of Salesforce, thinks the solution is new legislation. "The government is going to have to be involved. You do it exactly the same way you regulated the cigarette industry. Technology has addictive qualities that we have to address, and product designers are working to make those products more addictive. We need to rein that back."
In a news bulletin, University of California, Berkeley announces that its "Foundations of Data Science" course is "being offered free online this spring for the first time through the campus's online education hub, edX." From the report: The course -- Data 8X (Foundations of Data Science) -- covers everything from testing hypotheses and applying statistical inference to visualizing distributions and drawing conclusions, all while coding in Python and using real-world data sets. One lesson might take economic data from different countries over the years to track global economic growth. The next might use a data set of cell samples to create a classification algorithm that can diagnose breast cancer. (Learn more from a video on the Berkeley data science website.) The online program is based on the Foundations of Data Science course that Berkeley launched on campus in 2015 and now has more than 1,000 students enrolling every semester. The Foundations of Data Science edX Professional Certificate program is a sequence of three five-week courses taught by three winners of Berkeley's top teaching honor, the Distinguished Teaching Award: DeNero, statistics professor Ani Adhikari and computer science professor David Wagner. The first of the three parts has already started (9 a.m. on April 2), but enrollment will remain open after the course begins. Furthermore, anyone in the world can enroll for free, but those who want to earn the certificate will need to pay.
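The hypothesis-testing portion of a course like this typically leans on simulation rather than formulas. As a rough, self-contained illustration of that style (standard library only; this is not actual course code), here is a permutation test estimating whether two samples plausibly come from the same distribution:

```python
import random

def permutation_test(sample_a, sample_b, trials=10_000, seed=0):
    """Estimate a p-value for the observed difference in means by
    pooling both samples, shuffling, and re-splitting many times.

    The p-value is the fraction of random splits whose mean difference
    is at least as extreme as the one actually observed.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    observed = abs(sum(sample_a) / len(sample_a) - sum(sample_b) / len(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        a, b = pooled[:n_a], pooled[n_a:]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed:
            extreme += 1
    return extreme / trials
```

A small p-value means a difference that large almost never arises from random relabeling, so the two samples likely come from different distributions; a large p-value means the observed difference is unremarkable.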
Slashdot reader silverdirk writes: Compiled languages have long provided access to the OpenGL API, and even most scripting languages have had OpenGL bindings for a decade or more. But, one significant language missing from the list is our old friend/nemesis Bash. But worry no longer! Now you can create your dazzling 3D visuals right from the comfort of your command line! "You'll need a system with both Bash and OpenGL support to experience it firsthand," explains software engineer Michael Conrad, who created the first version 13 years ago as "the sixth in a series of 'Abuse of Technology' projects," after "having my technical sensibilities offended that someone had written a real-time video game in Perl. "Back then, my primary language was C++, and I was studying OpenGL for video game purposes. I declared to my friends that the only thing worse would be if it had been 3D and written in Bash. Having said the idea out loud, it kept prodding me, and I eventually decided to give it a try to one-up the 'awfulness'..."