News 2019

August 2019:

THE NEUROLOGIST WHO HACKED HIS BRAIN—AND ALMOST LOST HIS MIND

Researcher as Human Guinea Pig

Phil Kennedy hired a neurosurgeon in Belize to implant several electrodes in his brain and then insert a set of electronic components beneath his scalp. Back at home, Kennedy used this system to record his own brain signals in a months-long battery of experiments. His goal: Crack the neural code of human speech.

After that, Kennedy still had trouble finding words for things—he might look at a pencil and call it a pen—but his fluency improved. Once Cervantes felt his client had gotten halfway back to normal, he cleared him to go home. His early fears of having damaged Kennedy for life turned out to be unfounded; the language loss that left his patient briefly locked in was just a symptom of postoperative brain swelling. With that under control, he would be fine.

By the time Kennedy was back at his office seeing patients just a few days later, the clearest remaining indications of his Central American adventure were some lingering pronunciation problems and the sight of his shaved and bandaged head, which he sometimes hid beneath a multicolored Belizean hat. For the next several months, Kennedy stayed on anti-seizure medications as he waited for his neurons to grow inside the three cone electrodes in his skull.

Then, in October that same year, Kennedy flew back to Belize for a second surgery, this time to have a power coil and radio transceiver connected to the wires protruding from his brain. That surgery went fine, though both Powton and Cervantes were nonplussed at the components that Kennedy wanted tucked under his scalp. “I was a little surprised they were so big,” Powton says. The electronics had a clunky, retro look to them. Powton, who tinkers with drones in his spare time, was mystified that anyone would sew such an old-fangled gizmo inside his head: “I was like, ‘Haven’t you heard of microelectronics, dude?’ ”

Kennedy began the data-gathering phase of his grand self-experiment as soon as he returned home from Belize for the second time. The week before Thanksgiving, he went into his lab and balanced a magnetic power coil and receiver on his head. Then he started to record his brain activity as he said different phrases out loud and to himself—things like “I think she finds the zoo fun” and “The joy of a job makes a boy say wow”—while tapping a button to help sync his words with his neural traces, much like the way a filmmaker’s clapper board syncs picture and sound.

Over the next seven weeks, he spent most days seeing patients from 8 am until 3:30 pm and then used the evenings after work to run through his self-administered battery of tests. In his laboratory notes he is listed as Subject PK, as if to anonymize himself. His notes show that he went into the lab on Thanksgiving and on Christmas Eve.

The experiment didn’t last as long as he would have liked. The incision in his scalp never fully closed over the bulky mound of his electronics. After having had the full implant in his head for a total of just 88 days, Kennedy went back under the knife. But this time he didn’t bother going to Belize: A surgery to safeguard his health needed no approval from the FDA and would be covered by his regular insurance.

On January 13, 2015, a local surgeon opened up Kennedy’s scalp, snipped the wires coming from his brain, and removed the power coil and transceiver. He didn’t try to dig around in Kennedy’s cortex for the tips of the three glass cone electrodes that were embedded there. It was safer to leave those where they lay, enmeshed in Kennedy’s brain tissue, for the rest of his life.

Loss for Words

Yes, it’s possible to communicate directly via your brain waves. But it’s excruciatingly slow. Other substitutes for speech get the job done faster.

Kennedy’s lab sits in a leafy office park on the outskirts of Atlanta, in a yellow clapboard house. A shingle hanging out front identifies Suite B as the home of the Neural Signals Lab. When I meet Kennedy there one day in May 2015, he’s dressed in a tweed jacket and a blue-flecked tie, and his hair is neatly parted and brushed back from his forehead in a way that reveals a small depression in his left temple. “That’s when he was putting the electronics in,” Kennedy says with a slight Irish accent. “The retractor pulled on a branch of the nerve that went to my temporalis muscle. I can’t lift this eyebrow.” Indeed, I notice that the operation has left his handsome face with an asymmetric droop.

Kennedy agrees to show me the video of his first surgery in Belize, which has been saved to an old-fashioned CD-ROM. As I mentally prepare myself to see the exposed brain of the man standing next to me, Kennedy places the disc into the drive of a desktop computer running Windows 95. It responds with an awful grinding noise, like someone slowly sharpening a knife.

The disc takes a long time to load—so long that we have time to launch into a conversation about his highly unconventional research plan. “Scientists have to be individuals,” he says. “You can’t do science by committee.” As he goes on to talk about how the US too was built by individuals and not committees, the disc drive’s grunting takes on the timbre of a wagon rolling down a rocky trail: ga-chugga-chug, ga-chugga-chug. “Come on, machine!” he says, interrupting his train of thought as he clicks impatiently at some icons on the screen. “Oh for heaven’s sake, I just have inserted the disc!”

“We’ll extract our brains and connect them to computers that will do everything for us,” Kennedy says. “And the brains will live on.”

“I think people overrate brain surgery as being so terribly dangerous,” he goes on. “Brain surgery is not that difficult.” Ga-chugga-chug, ga-chugga-chug, ga-chugga-chug. “If you’ve got something to do scientifically, you just have to go and do it and not listen to naysayers.”

At last a video player window opens on the PC, revealing an image of Kennedy’s skull, his scalp pulled away from it with clamps. The grunting of the disc drive is replaced by the eerie, squeaky sound of metal bit on bone. “Oh, so they’re still drilling my poor head,” he says as we watch his craniotomy begin to play out onscreen.

“Just helping ALS patients and locked-in patients is one thing, but that’s not where we stop,” Kennedy says, moving on to the big picture. “The first goal is to get the speech restored. The second goal is to restore movement, and a lot of people are working on that—that’ll happen, they just need better electrodes. And the third goal would then be to start enhancing normal humans.”

He clicks the video ahead, to another clip in which we see his brain exposed—a glistening patch of tissue with blood vessels crawling all along the top. Cervantes pokes an electrode down into Kennedy’s neural jelly and starts tugging at the wire. Every so often a blue-gloved hand pauses to dab the cortex with a Gelfoam to stanch a plume of blood.

“Your brain will be infinitely more powerful than the brains we have now,” Kennedy continues, as his brain pulsates onscreen. “We’re going to extract our brains and connect them to small computers that will do everything for us, and the brains will live on.”

“You’re excited for that to happen?” I ask.

“Pshaw, yeah, oh my God,” he says. “This is how we’re evolving.”

Sitting there in Kennedy’s office, staring at his old computer monitor, I’m not so sure I agree. It seems like technology always finds new and better ways to disappoint us, even as it grows more advanced every year. My smartphone can build words and sentences from my sloppy finger-swipes. But I still curse at its mistakes. (Damn you, autocorrect!) I know that, around the corner, technology far better than Kennedy’s juddering computer, his clunky electronics, and my Google Nexus 5 phone is on its way. But will people really want to entrust their brains to it?

On the screen, Cervantes jabs another wire through Kennedy’s cortex. “The surgeon is very good, actually, a very nice pair of hands,” Kennedy said when we first started watching the video. But now he deviates from our discussion about evolution to bark orders at the screen, like a sports fan in front of a TV. “No, don’t do that, don’t lift it up,” Kennedy says to the pair of hands operating on his brain. “It shouldn’t go in at that angle,” he explains to me before turning back to the computer. “Push it in more than that!” he says. “OK, that’s plenty, that’s plenty. Don’t push anymore!”

These days, invasive brain implants have been going out of style. The major funders of neural prosthesis research favor an approach that involves laying a flat grid of electrodes, 8 by 8 or 16 by 16 of them, across the naked surface of the brain. This method, called electrocorticography, or ECoG, provides a more blurred-out, impressionistic measure of activity than Kennedy’s: Instead of tuning to the voices of single neurons, it listens to a bigger chorus—or, I suppose, committee—of them, as many as hundreds of thousands of neurons at a time.

Proponents of ECoG argue that these choral traces can convey enough information for a computer to decode the brain’s intent—even what words or syllables a person means to say. Some smearing of the data might even be a boon: You don’t want to fixate on a single wonky violinist when it takes a symphony of neurons to move your vocal cords and lips and tongue. The ECoG grid can also safely stay in place under the skull for a long time, perhaps even longer than Kennedy’s cone electrodes. “We don’t really know what the limits are, but it’s definitely years or decades,” says Edward Chang, a surgeon and neurophysiologist at UC San Francisco, who has become one of the leading figures in the field and who is working on a speech prosthesis of his own.

Last summer, as Kennedy was gathering his data to present it at the 2015 meeting of the Society for Neuroscience, another lab published a new procedure for using computers and cranial implants to decode human speech. Called Brain-to-Text, it was developed at the Wadsworth Center in New York in collaboration with researchers in Germany and the Albany Medical Center, and it was tested on seven epileptic patients with implanted ECoG grids. Each subject was asked to read aloud—sections of the Gettysburg Address, the story of Humpty Dumpty, John F. Kennedy’s inaugural, and an anonymous piece of fan fiction related to the TV show Charmed—while their neural data was recorded. Then the researchers used the ECoG traces to train software for converting neural data into speech sounds and fed its output into a predictive language model—a piece of software that works a bit like the speech-to-text engine on your phone—that could guess which words were coming based on what had come before.
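
That last step, the predictive language model, is easier to picture with a toy sketch. The Java fragment below is a minimal illustration of the general idea only, not the Wadsworth team’s software; the class name, training lines and counting scheme are invented for the example. A bigram model simply counts which word tends to follow which, then guesses accordingly:

import java.util.*;

// Toy illustration of the predictive-language-model step described above:
// a bigram model that guesses the next word from the word that came before.
// The vocabulary and counts here are invented for the example; a real
// Brain-to-Text system trains on transcripts aligned with ECoG recordings.
public class BigramGuesser {
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();

    // Record every adjacent word pair in a training sentence.
    public void train(String sentence) {
        String[] w = sentence.toLowerCase().split("\\s+");
        for (int i = 0; i + 1 < w.length; i++) {
            counts.computeIfAbsent(w[i], k -> new HashMap<>())
                  .merge(w[i + 1], 1, Integer::sum);
        }
    }

    // Return the most frequent successor of the previous word, if any.
    public Optional<String> guessNext(String previous) {
        Map<String, Integer> next = counts.get(previous.toLowerCase());
        if (next == null) return Optional.empty();
        return next.entrySet().stream()
                   .max(Map.Entry.comparingByValue())
                   .map(Map.Entry::getKey);
    }

    public static void main(String[] args) {
        BigramGuesser lm = new BigramGuesser();
        lm.train("humpty dumpty sat on a wall");
        lm.train("humpty dumpty had a great fall");
        System.out.println(lm.guessNext("humpty").orElse("?")); // prints "dumpty"
    }
}

A real decoder scores thousands of candidate words probabilistically and fuses them with the neural evidence, but the guess-the-next-word-from-context principle is the same.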

Kennedy is tired of the Zeno’s paradox of human progress. He has no patience for getting halfway to the future. That’s why he adamantly pushes forward.

Incredibly, the system kind of worked. The computer spat out snippets of text that bore more than a passing resemblance to Humpty Dumpty, Charmed fan fiction, and the rest. “We got a relationship,” says Gerwin Schalk, an ECoG expert and coauthor of the study. “We showed that it reconstructed spoken text much better than chance.” Earlier speech prosthesis work had shown that individual vowel sounds and consonants could be decoded from the brain; now Schalk’s group had shown that it’s possible—though difficult and error-prone—to go from brain activity to fully spoken sentences.

But even Schalk admits that this was, at best, a proof of concept. It will be a long time before anyone starts sending fully formed thoughts to a computer, he says—and even longer before anyone finds it really useful. Think about speech-recognition software, which has been around for decades, Schalk says. “It was probably 80 percent accurate in 1980 or something, and 80 percent is a pretty remarkable achievement in terms of engineering. But it’s useless in the real world,” he says. “I still don’t use Siri, because it’s not good enough.”

In the meantime, there are far simpler and more functional ways to help people who have trouble speaking. If a patient can move a finger, he can type out messages in Morse code. If a patient can move her eyes, she can use eye-tracking software on a smartphone. “These devices are dirt cheap,” Schalk says. “Now you want to replace one of these with a $100,000 brain implant and get something that’s a little better than chance?”
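
To make that point concrete, here is how little machinery the Morse route needs. This is a hypothetical sketch: the lookup table is truncated to the letters used in the demo, and a real communication aid would cover the full alphabet and read a physical switch rather than a string.

import java.util.Map;

// Minimal sketch of the Morse-code route described above: one movable
// finger is enough, because each letter is just a short dot-dash string.
// The table is truncated to the letters needed for this demo.
public class MorseSketch {
    private static final Map<Character, String> CODE = Map.of(
        'h', "....", 'e', ".", 'l', ".-..", 'o', "---"
    );

    static String encode(String text) {
        StringBuilder out = new StringBuilder();
        for (char c : text.toLowerCase().toCharArray()) {
            out.append(CODE.getOrDefault(c, "?")).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(encode("hello")); // .... . .-.. .-.. ---
    }
}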

I try to square this idea with all the stunning cyborg demonstrations that have made their way into the media over the years—people drinking coffee with robotic arms, people getting brain implants in Belize. The future always seems so near at hand, just as it did a half century ago when José Delgado stepped into that bullring. One day soon we’ll all be brains inside computers; one day soon our thoughts and feelings will be uploaded to the Internet; one day soon our mental states will be shared and data-mined. We can already see the outlines of this scary and amazing place just on the horizon—but the closer we get, the more it seems to fall back into the distance.

Kennedy, for one, has grown tired of this Zeno’s paradox of human progress; he has no patience for always getting halfway to the future. That’s why he adamantly pushes forward: to prepare us all for the world he wrote about in 2051, the one that Delgado believed was just around the corner.

When Kennedy finally did present the data that he’d gathered from himself—first at an Emory University symposium last May and then at the Society for Neuroscience conference in October—some of his colleagues were tentatively supportive. By taking on the risk himself, by working alone and out-of-pocket, Kennedy managed to create a sui generis record of language in the brain, Chang says: “It’s a very precious set of data, whether or not it will ultimately hold the secret for a speech prosthetic. It’s truly an extraordinary event.” Other colleagues found the story thrilling, even if they were somewhat baffled: In a field that is constantly hitting up against ethical roadblocks, this man they’d known for years, and always liked, had made a bold and unexpected bid to force brain research to its destiny. Still other scientists were simply aghast. “Some thought I was brave, some thought I was crazy,” Kennedy says.

In Georgia, I ask Kennedy if he’d ever do the experiment again. “On myself?” he says. “No. I shouldn’t do this again. I mean, certainly not on the same side.” He taps his temple, where the cone electrode tips are still lodged. Then, as if energized by the idea of putting implants on the other side of his brain, he launches into plans for making new electrodes and more sophisticated implants; for getting back the FDA’s approval for his work; for finding grants so that he can pay for everything.

“No, I shouldn’t do the other side,” he says finally. “Anyway, I don’t have the electronics for it. Ask me again when we’ve built them.”

Here’s what I take from my time with Kennedy, and from his garbled answer: You can’t always plan your path into the future. Sometimes you have to build it first.” Wired

July 2019:

Now Some Families Are Hiring Coaches to Help Them Raise Phone-Free Children

Screen consultants are here to help you remember life before smartphones and tablets.

Parents around the country are trying to turn back time to the era before smartphones. But it’s not easy to remember what exactly things were like before smartphones. So they’re hiring professionals.

A new screen-free parenting coach economy has sprung up to serve the demand. Screen consultants come into homes, schools, churches and synagogues to remind parents how people parented before.

Rhonda Moskowitz is a parenting coach in Columbus, Ohio. She has a master’s degree in K-12 learning and behavior disabilities, and over 30 years’ experience in schools and private practice. She barely needs any of this training now.

“I try to really meet the parents where they are, and now often it is very simple: ‘Do you have a plain old piece of material that can be used as a cape?’” said Ms. Moskowitz. “‘Great!’”

“‘Is there a ball somewhere? Throw the ball,’” she said. “‘Kick the ball.’”

Among affluent parents, fear of phones is rampant, and it’s easy to see why. The wild look their kids have when they try to pry them off Fortnite is alarming. Most parents suspect dinnertime probably shouldn’t be spent on Instagram. The YouTube recommendation engine seems like it could make a young radical out of anyone. Now, major media outlets are telling them their children might grow smartphone-related skull horns.

No one knows what screens will make of society, good or bad. This worldwide experiment of giving everyone an exciting piece of hand-held technology is still new.

Gloria DeGaetano was a private coach working in Seattle to wean families off screens when she noticed the demand was higher than she could handle on her own. She launched the Parent Coaching Institute, a network of 500 coaches and a training program. Her coaches in small cities and rural areas charge $80 an hour. In larger cities, rates range from $125 to $250. Parents typically sign up for eight to 12 sessions.

“If you mess with Mother Nature, it messes with you,” Ms. DeGaetano said of her philosophy. “You can’t be a machine. We’re thinking like machines because we live in this mechanistic milieu. You can’t grow children optimally from principles in a mechanistic mind-set.”

Screen addiction is the top issue parents hope she can cure. Her prescriptions are often absurdly basic.

“Movement,” Ms. DeGaetano said. “Is there enough running around that will help them see their autonomy? Is there a jungle gym or a jumping rope?”

Nearby, Emily Cherkin was teaching middle school in Seattle when she noticed families around her panicked over screens and coming to her for advice. She took surveys of middle school students and teachers in the area.

“I realized I really have a market here,” she said. “There’s a need.”

She quit teaching and opened two small businesses. There’s her intervention work as the Screentime Consultant — and now there’s a co-working space attached to a play space for kids needing “Screentime-Alternative” activities. (That’s playing with blocks and painting.)

A movement reminiscent of the “virginity pledge” — a vogue in the late ’90s in which young people promised to wait until marriage to have sexual contact — is bubbling up across the country.

In this 21st-century version, a group of parents band together and make public promises to withhold smartphones from their children until eighth grade. Now there are local groups cropping up. Parents can gather for phone-free camaraderie in the Turning Life On support community.

Parents who make these pledges work to promote the idea of healthy adult phone use, and promise complete abstinence until eighth grade or even later.

Susannah Baxley’s daughter is in fifth grade.

“I have told her she can have access to social media when she goes to college,” said Ms. Baxley, who is now organizing a phone-delay pledge. So far, she has about 50 parents signed on.

Do parents need the peer pressure of promises, and coaches telling them how to parent?

“It’s not that challenging: be attentive to your phone use, notice the ways it interferes with being present,” said Erica Reischer, a psychologist and parent coach in San Francisco. “There’s this commercialization of everything that can be commercialized, including this now.”

To Dr. Reischer, the new consultant boom and screen addiction are part of the same problem.

“It’s part of the mind-set that gets us stuck on our phones in the first place — the optimization efficiency mind-set,” Dr. Reischer said. “We want answers served up to us — ‘Just tell me what to do, and I’ll do it.’”

But what seems self-evident can be hard to remember, and hard to stick with.

“Yes, it’s just hearing something that’s so blatantly obvious, but I couldn’t see it,” said Julie Wasserstrom, a 43-year-old mother of two in Bexley, Ohio.

She hired Ms. Moskowitz and found the advice useful.

“She just said things like, ‘Are you telling your kids, “No screens at the table” — but your phone is on your lap?’” Ms. Wasserstrom said. “When we were growing up, we didn’t have these, so our parents couldn’t role model appropriate behaviors to us, and we have to learn what is appropriate so we can role model that for them.”

Ms. Wasserstrom compared screens to a knife or a hot stove.

“You won’t send your kid into the kitchen with a hot stove without giving them instructions or just hand them a knife,” Ms. Wasserstrom said. “You have to be a role model on safe ways to use a knife.” NY Times

Adding a Microchip to Your Brain?

You might risk losing yourself

As artificial intelligence creates large-scale unemployment, some professionals are attempting to maintain intellectual parity by adding microchips to their brains. Even aside from career worries, it’s not difficult to understand the appeal of merging with A.I. After all, if enhancement leads to superintelligence and extreme longevity, isn’t it better than the alternative — the inevitable degeneration of the brain and body?

At the Center for Mind Design in Manhattan, customers will soon be able to choose from a variety of brain enhancements: Human Calculator promises to give you savant-level mathematical abilities; Zen Garden can make you calmer and more efficient. It is also rumored that if clinical trials go as planned, customers will soon be able to purchase an enhancement bundle called Merge — a series of enhancements allowing customers to gradually augment and transfer all of their mental functions to the cloud over a period of five years.

Unfortunately, these brain chips may fail to do their job for two philosophical reasons. The first involves the nature of consciousness. Notice that as you read this, it feels like something to be you — you are having conscious experience. You are feeling bodily sensations, hearing background noise, seeing the words on the page. Without consciousness, experience itself simply wouldn’t exist.

Many philosophers view the nature of consciousness as a mystery. They believe that we don’t fully understand why all the information processing in the brain feels like something. They also believe that we still don’t understand whether consciousness is unique to our biological substrate, or if other substrates — like silicon or graphene microchips — are also capable of generating conscious experiences.

For the sake of argument, let’s assume microchips are the wrong substrate for consciousness. In this case, if you replaced one or more parts of your brain with microchips, you would diminish or end your life as a conscious being. If this is true, then consciousness, as glorious as it is, may be the very thing that limits our intelligence augmentation. If microchips are the wrong stuff, then A.I.s themselves wouldn’t have this design ceiling on intelligence augmentation — but they would be incapable of consciousness.

You might object, saying that we can still enhance parts of the brain not responsible for consciousness. It is true that much of what the brain does is nonconscious computation, but neuroscientists suspect that our working memory and attentional systems are part of the neural basis of consciousness. These systems are notoriously slow, processing only about four manageable chunks of information at a time. If replacing parts of these systems with A.I. components produces a loss of consciousness, we may be stuck with our pre-existing bandwidth limitations. This may amount to a massive bottleneck on the brain’s capacity to attend to and synthesize data piping in through chips used in areas of the brain that are not responsible for consciousness.

But let’s suppose that microchips turn out to be the right stuff. There is still a second problem, one that involves the nature of the self. Imagine that, longing for superintelligence, you consider buying Merge. To understand whether you should embark upon this journey, you must first understand what and who you are. But what is a self or person? What allows a self to continue existing over time? Like consciousness, the nature of the self is a matter of intense philosophical controversy. And given your conception of a self or person, would you continue to exist after adding Merge — or would you have ceased to exist, having been replaced by someone else? If the latter, why try Merge in the first place?

Even if your hypothetical merger with A.I. brings benefits like superhuman intelligence and radical life extension, it must not involve the elimination of any of what philosophers call “essential properties” — the things that make you you. Even if you would like to become superintelligent, knowingly trading away one or more of your essential properties would be tantamount to suicide — that is, to your intentionally causing yourself to cease to exist. So before you attempt to redesign your mind, you’d better know what your essential properties are.

Unfortunately, there’s no clear answer about what your essential properties might be. Many philosophers sympathize with the “psychological continuity view,” which says that our memories and personality dispositions make us who we are. But this means that if we change our memories or personality in radical ways, the continuity could be broken. Another leading view is that your brain is essential to you, even if there are radical breaks in continuity. But on this view, enhancements like Merge are unsafe, because you are replacing parts of your brain with A.I. components.” NY Times

June 2019:

Google and Oracle’s $9 Billion ‘Copyright Case of the Decade’ Could be Headed for the Supreme Court

“Google calls it the ‘copyright case of the decade.’

“It” is the $9 billion copyright infringement suit Oracle filed against the search giant nearly 10 years ago. Oracle brought the case in 2010 after Google incorporated 11,500 lines of Oracle’s Java code into Google’s Android platform for smartphones and tablets. Android has since become the world’s most popular operating system, running on more than 2.5 billion devices.

Google won twice at the U.S. District Court level. But each time, a federal appeals court overturned the verdict, ruling for Oracle. Now, Google is begging the Supreme Court to hear the case, and so are the 175 companies, nonprofits and individuals who have signed 15 friend-of-the-court briefs supporting Google’s plea.

Here’s the pressing issue: How much protection do copyright laws give to application program interfaces, or APIs? That might sound arcane, but these interfaces are omnipresent in software today. They form the junctions between all the different software applications developed by various companies and independent developers that must seamlessly interact to work right.

All the apps that sit on our smartphones—like Pandora or Uber—use interfaces to communicate with our phones’ operating systems (Apple iOS for iPhones, for example). If the owner of a platform can claim, through copyright, to own those interfaces, it can limit innovation and competition, Google contends. Not only can it determine who gets to write software on its own platform, but, as we’ll see, it may even be able to prevent rival platforms from ever being written. The Harvard Journal of Law and Technology considers the case so consequential that it devoted an entire 360-page “special issue” to it last year.

“If the appeals court’s rulings stand, it’s likely to lead to entrenching dominant firms in software industries,” says Randy Stutz, an attorney with the American Antitrust Institute, which supports Google in the dispute.

Oracle, on the other hand, says the case is cut-and-dried. Its basic argument: Google negotiated to take a license for the Java code, it wasn’t able to reach terms, and then it used portions of the code anyway. (And that’s all true.) Now, it’s time to pay the piper.

“Before Android,” Oracle’s lawyers write in their brief to the Supreme Court, “every company that wanted to use the Java platform took a commercial license…including smartphone manufacturers BlackBerry, Nokia and Danger.”

Oracle claims that, if not for Android, Oracle’s own Java software could have become a major smartphone platform. (Although Java was written by Sun Microsystems, Oracle acquired Sun in 2010, shortly before bringing this suit.) Oracle’s lawyers mock the notion that the rulings in its favor will spawn any dire consequences. Despite Google’s “sky-is-falling” arguments, they write, the software industry did not crash in the wake of May 2014 or March 2018, when the U.S. Court of Appeals for the Federal Circuit issued the two key rulings that Google seeks to reverse.

In fact, Oracle has enjoyed fervent support from its own friend-of-the-court briefs, including one from BSA, the Software Alliance, which counts companies like Adobe, Apple and IBM among its members.

Remarkably, for a case about software interfaces, the key Supreme Court precedent was decided in 1879. Obviously, that suit didn’t involve a smartphone platform, but it did define the limits of copyright and explain how a copyright differs from a patent. In that dispute, Charles Selden had authored and copyrighted a book laying out a method of bookkeeping. The book included some blank forms that could be used to implement the system. Later, W.C.M. Baker began marketing his own set of forms to implement Selden’s method that were very similar to those in Selden’s book.

Selden’s widow sued Baker for copyright infringement—and lost. Basically, Justice Joseph Bradley explained in the opinion, she was trying to use copyright to protect the ideas contained in Selden’s book. He explained that, while a patent can protect an idea, a copyright protects only expression—in this case, the particular words Selden used to describe his bookkeeping method. “The copyright…cannot give to the author an exclusive right to the methods of operation which he propounds,” the Supreme Court’s decision said. (Selden had not patented his bookkeeping method.) Since Selden had no monopoly on his method, he had no monopoly on the forms needed to carry out that method.

Congress later wrote the Court’s Baker v. Selden ruling into the federal copyright statute, specifying that a copyright cannot “extend to any idea, procedure, process, system, [or] method of operation,” even if that idea is “described” in copyrighted work.

To put it in modern-day terms: Even if Marie Kondo copyrights a book on organizing, she can’t sue you for rolling your clothes.

That’s part of the essential background against which, 140 years later, Oracle’s dispute with Google will be judged. (Incidentally, Oracle did own patents on aspects of Java, and its suit against Google originally included patent claims. But a jury threw those out in 2012, and Oracle did not appeal. So Oracle’s case now stands or falls on its copyright claims.)

To decide Oracle’s case, the Supreme Court will have to look closely at exactly what an application program interface is. Such an interface is composed of two key parts. One part is a shorthand label, in effect, that a software developer can write into a program when he wants a certain task performed. That label will call up a much longer, prewritten module of code that will actually supply the step-by-step instructions for accomplishing a task, which the developer won’t have to write himself. The label is known as a “declaration,” while the longer module it summons into operation is the “implementing code.” Newsweek
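
In Java terms, the split is easy to see in miniature. The snippet below is an invented illustration, not code from the case: the method signature plays the role of the declaration, and the method body plays the role of the implementing code.

// Illustration of the declaration/implementing-code split described
// above (invented example, not code from the litigation).
public class MaxExample {
    // The declaration: the shorthand label other programs call.
    public static int max(int a, int b) {
        // The implementing code: the step-by-step instructions.
        return (a >= b) ? a : b;
    }

    public static void main(String[] args) {
        // A caller needs to know only the declaration, not the body.
        System.out.println(max(3, 7)); // prints 7
    }
}

Google copied declarations like the signature above and wrote its own implementing code; the legal fight is over how much copyright protection those declarations themselves deserve.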


Rise in Unruly Behavior on Planes Is Tied to Stress of Flying

“Flying has increasingly become a world of the haves and have-nots, starting with purchasing a ticket and continuing as passengers are sorted by status to board.

Once on the plane, passengers can see where they fit in the hierarchy, with the seats getting smaller and thinner and legroom tighter with each passing row. Then, there’s the scramble to secure space in the overhead bins.

“By the time you walk down the jet bridge, you are a bundle of nerves,” said Henry Harteveldt, founder of Atmosphere Research Group, a travel analysis firm in San Francisco.

Now, some researchers are arguing that the stresses of flying — and they say income inequality is among them — contribute to an increase in unruly behavior on planes.

Add to that mix the fact that there are fewer flight attendants on duty than there once were. Most domestic flights, said Taylor Garland, a spokeswoman for the Association of Flight Attendants, are run with the minimum number of attendants allowed, which means crews may not be aware of problems.

Airlines fully comply with all federal safety rules and regulations, including those pertaining to crew staffing aboard the aircraft, a spokesman for Airlines for America, the group’s trade association, said.

The International Air Transport Association, an industry trade group with about 290 member airlines, found that there was one disruptive incident for every 1,053 flights in 2017, the last year for which data was available. In 2016, there was one incident for every 1,424 flights.
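
To put the association’s two figures on a common footing, a quick calculation on the numbers above: $\frac{1/1{,}053}{1/1{,}424} = \frac{1{,}424}{1{,}053} \approx 1.35$, meaning the per-flight incident rate rose roughly 35 percent in a single year.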

And every incident can affect passenger and flight safety. In some cases, pilots turn planes around, creating major delays.

“Airplanes are the physical embodiment of a status hierarchy,” said Keith Payne, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill and the author of “The Broken Ladder: How Inequality Affects the Way We Think, Live and Die.” “They are a social ladder made of aluminum and upholstery in which the rungs are represented by rows of boarding groups and seating classes.”

“Crowding,” Mr. Payne added, “is a risk factor for aggression, and more people in a small space make that more likely to happen.”

One general option when an interaction starts to escalate, he noted, is to walk away and cool off. But, he said, on a plane there are few escape routes.

A study, published in 2016, found that multiple classes on an aircraft increased the likelihood of misbehavior. The study by Katherine A. DeCelles, now at the University of Toronto, and Michael I. Norton, at Harvard Business School, was published in the Proceedings of the National Academy of Sciences.

The authors found that the presence of a first-class cabin, in addition to an economy-class cabin, was associated with more frequent air rage incidents. And boarding through the first-class cabin rather than the midsection of a plane increased those incidents.

The air transport association found that disruptive incidents fell into several categories, with ignoring safety regulations, excessive drinking before a flight and smoking the most common. The association also had reports of more severe disruptions, including physical aggression and damage to equipment.

It’s far too easy for unruly events to occur, Mr. Harteveldt said.

Now, British and European Union organizations are starting programs to minimize disruptions. “No one wants to be next to that passenger who has consumed too much alcohol or is aggressive or rude due to other reasons,” said Henk Van Klaveren, head of public affairs at the Airport Operators Association in London.

In July, the Airport Operators Association, the U.K. Travel Retail Forum and the air transport association (and, later, Airlines U.K.) introduced a media campaign to curb excessive drinking. Called One Too Many, the program is expected to return this summer.

The campaign began at 10 airports (14 now participate, including Heathrow) with airport screens and posters and a leaflet distributed by the police. It also appeared on Facebook, Instagram and Snapchat.

One of the more extreme incidents occurred on a Ryanair flight from Dublin to Malta at the end of April. Fights broke out, and inebriated passengers danced on seats and abused the flight crew. The crew requested police assistance while in flight and, when the plane landed, the police removed and detained some of the passengers.

“We will not tolerate unruly or disruptive behavior at any time, and the safety and comfort of our customers, crew and aircraft is our No. 1 priority,” the airline said in a statement. “This is now a matter for the local police.”

The police in Malta said two 23-year-old male passengers had been arraigned before a magistrate. They were accused of boarding an aircraft when drunk, acting in a manner likely to endanger an aircraft or any person and interfering with the aircraft crew’s ability to perform its duties. Each was fined 1,500 euros, or about $1,674.

In March, two hours into a Hawaiian Airlines flight from Honolulu to Los Angeles, two passengers started quarreling in the aisle. Flight attendants tried to move one to another seat.

The pilot enforced security procedures and returned to Oahu. When the plane landed, the flight attendants were treated for injuries, and the passengers who had been fighting were taken into custody. The aircraft finally arrived in Los Angeles five hours late.

What an airline considers acceptable behavior is outlined in its contract of carriage. This document is available online, at airline ticketing facilities or by request from customer service. Contracts may be different for domestic and international flights.

Currently, a serious offense is tried according to the law of the aircraft’s country of registration. A new treaty would consider whether a serious offense had been committed regardless of national registration: jurisdiction over disruptive passengers would extend beyond the country where the aircraft is registered to include the destination country. The treaty requires ratification by 22 countries; so far, 19 have ratified it.

The agreement adds provisions to recover costs from unruly passengers, said William V. O’Connor, a partner at the law firm Cooley in San Diego whose practice includes aviation matters.

“There are issues of proof,” he said, “and airlines are reluctant to involve customers as witnesses.”

There is no timetable for the international treaty to go into effect. “You can’t force signatories or put them on the clock,” Mr. O’Connor said.

Yet “airlines keep crowding people in the same amount of space and keep adding gradations in the space,” Mr. Payne of the University of North Carolina said, even as they create social norms about appropriate flight behavior.” NY Times

May 2019:

5,000-YEAR-OLD FAKE AMBER BEADS FOUND AT SPANISH BURIAL SITES ARE FIRST EXAMPLES OF EUROPEAN JEWELRY FRAUD

“Archaeologists believe sets of fake amber beads discovered at Spanish burial sites are the first known examples of faked jewelry in European prehistory. 

For the study published in the journal PLOS ONE, archaeologists looked at two sets of ancient beads. Two dating back to the third millennium B.C. were found at the artificial cave of La Molina in the city of Lora de Estepa, southern Spain. At the site, 10 people were buried alongside goods including pottery vessels, bone awls and objects carved out of ivory.

Another four from the second millennium B.C. were found at the Cova del Gegant cave in the coastal town of Sitges, to the southwest of Barcelona. There, almost 2,000 human bones from the Middle Bronze Age thought to belong to 19 people were discovered, as well as pieces of pottery, and ornamental beads made of lignite, coral, amber, shell and gold.

The specimens looked like amber, and many archaeologists were tricked by the beads, study co-author Carlos P. Odriozola, a professor in the University of Sevilla’s Department of Prehistory and Archaeology, explained to Newsweek. But tests revealed that the Cova del Gegant beads, found near genuine amber beads, were in fact mollusk-shell cores coated with what is believed to be pine resin, while the La Molina beads were seeds coated in resin.

“With this type of surface coating it was possible to emulate effectively the translucence, shine and color that made amber such an appreciated material,” the authors wrote. Similar methods of imitating turquoise in the Levant from the sixth millennium B.C. have been identified in past research.

“These two archaeological sites resemble each other in many ways despite the geographic and chronological differences and point to technical practices and knowledge in Late Prehistory that would have been more frequent than tends to be documented, as other finds in the Near East have shown,” the authors said.

Odriozola explained that people started to trade commodities and valuables as farming emerged as a way of life in Europe thousands of years ago. Amber, which is made of fossilized tree resin, was a highly valuable material exchanged across the continent and used by leaders to cultivate an image of power and flaunt their wealth. Succinite amber was brought into Spain from the Baltic Sea and simetite amber from Sicily. This route also brought in Asiatic and African ivory, Alpine jade, and cinnabar.

“The fact that a wealthy individual from La Molina cave was buried with an amber-like artifact opens a pathway to research on the trading network and the role of middleman (traders) and valuable supply systems in these communities from the third millennium B.C.,” Odriozola said.

“Something similar happens in Cova del Gegant where individuals from the second millennium B.C. were buried with two genuine amber beads and four amber-like beads.”

So why were the counterfeit beads created? Experts aren’t sure. The authors remarked that it is curious that “exotic materials” were found in both tomb sites, suggesting the dead were wealthy enough to afford rare goods and genuine amber.

Odriozola suggested it could be that there wasn’t enough amber to meet demand, or perhaps traders simply wanted to cheat the wealthy out of money.

The study should encourage experts to reconsider whether valuable prehistoric items really are what they appear to be, said Odriozola.  

Peter van Dommelen, professor of archaeology and anthropology at Brown University who was not involved in the research, told Newsweek: “This study is new and interesting but not necessarily significant in its own right. The really significant part was done five to 10 years ago when it was shown that amber in this early prehistoric period did not come from the Baltic but from Sicily.

“This find adds an interesting touch to it, and opens the door to a better understanding of how and why such items were traded and ‘faked’—the association with North African ivory is crucial in this regard, and the North African involvement is perhaps the major insight that is coming out.”

He argued that more needs to be done than scientific analysis of the provenience of materials, and that the study should be followed up with more sophisticated interpretive frameworks.

Archaeologists believe humans have been adorning themselves with jewelry for hundreds of thousands of years as a means of communication, whether using a ring to indicate marriage or to reflect social status, according to an ARC DECRA research fellow at Griffith University.” Newsweek

How the Internet Travels Across Oceans

By ADAM SATARIANO

“The internet consists of tiny bits of code that move around the world, traveling along wires as thin as a strand of hair strung across the ocean floor. The data zips from New York to Sydney, from Hong Kong to London, in the time it takes you to read this word.
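
A rough check on that claim, using standard figures that do not appear in the article: light in glass fiber travels at about two-thirds of its vacuum speed, roughly $2 \times 10^{5}$ km/s, so over a New York-to-Sydney route on the order of 16,000 km the one-way propagation time is

$$t \approx \frac{16{,}000\ \text{km}}{2 \times 10^{5}\ \text{km/s}} \approx 0.08\ \text{s},$$

about 80 milliseconds, which is indeed roughly the time it takes to read a single word.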

Nearly 750,000 miles of cable already connect the continents to support our insatiable demand for communication and entertainment. Companies have typically pooled their resources to collaborate on undersea cable projects, like a freeway for them all to share.

But now Google is going its own way, in a first-of-its-kind project connecting the United States to Chile, home to the company’s largest data center in Latin America.

“People think that data is in the cloud, but it’s not,” said Jayne Stowell, who oversees construction of Google’s undersea cable projects. “It’s in the ocean.”

Getting it there is an exacting and time-intensive process. A 456-foot ship named Durable will eventually deliver the cable to sea. But first, the cable is assembled inside a sprawling factory a few hundred yards away, in Newington, N.H. The factory, owned by the company SubCom, is filled with specialized machinery used to maintain tension in the wire and encase it in protective skin.

The cables begin as a cluster of strands of tiny threads of glass fibers. Lasers propel data down the threads at nearly the speed of light, using fiber-optic technology. After reaching land and connecting with an existing network, the data needed to read an email or open a web page makes its way onto a person’s device.

While most of us now largely experience the internet through Wi-Fi and phone data plans, those systems eventually link up with physical cables that swiftly carry the information across continents or across oceans.

In the manufacturing process, the cables move through high-speed mills the size of jet engines, wrapping the wire in a copper casing that carries electricity across the line to keep the data moving. Depending on where the cable will be located, plastic, steel and tar are added later to help it withstand unpredictable ocean environments. When finished, the cables will end up the size of a thick garden hose.

A year of planning goes into charting a cable route that avoids underwater hazards, but the cables still have to withstand heavy currents, rock slides, earthquakes and interference from fishing trawlers. Each cable is expected to last up to 25 years.

A conveyor that staff members call “the Cable Highway” moves the cable directly into Durable, docked in the Piscataqua River. The ship will carry over 4,000 miles of cable weighing about 3,500 metric tons when fully loaded.

Inside the ship, workers spool the cable into cavernous tanks. One person walks the cable swiftly in a circle, as if laying out a massive garden hose, while others lie down to hold it in place to ensure it doesn’t snag or knot. Even with teams working around the clock, it takes about four weeks before the ship is loaded up with enough cable to hit the open sea.

The first trans-Atlantic cable was completed in 1858 to connect the United States and Britain. Queen Victoria commemorated the occasion with a message to President James Buchanan that took 16 hours to transmit.

While new wireless and satellite technologies have been invented in the decades since, cables remain the fastest, most efficient and least expensive way to send information across the ocean. And it is still far from cheap: Google would not disclose the cost of its project to Chile, but experts say subsea projects cost up to $350 million, depending on the length of the cable.

In the modern era, telecommunications companies laid most of the cable, but over the past decade American tech giants started taking more control. Google has backed at least 14 cables globally. Amazon, Facebook and Microsoft have invested in others, connecting data centers in North America, South America, Asia, Europe and Africa, according to TeleGeography, a research firm.

Countries view the undersea cables as critical infrastructure and the projects have been flash points in geopolitical disputes. Last year, Australia stepped in to block the Chinese technology giant Huawei from building a cable connecting Australia to the Solomon Islands, for fear it would give the Chinese government an entry point into its networks.

Yann Durieux, a ship captain, said one of his most important responsibilities was keeping morale up among his crew during the weeks at sea. Building the infrastructure of our digital world is a labor-intensive job.

With 53 bedrooms and 60 bathrooms, the Durable can hold up to 80 crew members. The team splits into two 12-hour shifts. Signs warn to be quiet in the hallways because somebody is always sleeping.

The ship will carry enough supplies to last at least 60 days: roughly 200 loaves of bread, 100 gallons of milk, 500 cartons of a dozen eggs, 800 pounds of beef, 1,200 pounds of chicken and 1,800 pounds of rice. There’s also 300 rolls of paper towels, 500 rolls of toilet paper, 700 bars of soap and almost 600 pounds of laundry detergent. No alcohol is allowed on board.

“I still get seasick,” said Walt Oswald, a technician who has been laying cables on ships for 20 years. He sticks a small patch behind his ear to hold back the nausea. “It’s not for everybody.”

Poor weather is inevitable. Swells reach up to 20 feet, occasionally requiring the ship captain to order the subsea cable to be cut so the ship can seek safer waters. When conditions improve, the ship returns, retrieving the cut cable that has been left attached to a floating buoy, then splicing it back together before continuing on.

Work on board is slow and plodding. The ship, at sea for months at a time, moves about six miles per hour, as the cables are pulled from the giant basins out through openings at the back of the ship. Closer to shore, where there’s more risk of damage, an underwater plow is used to bury the cable in the sea floor.

Content providers like Microsoft, Google, Facebook and Amazon now own or lease more than half of the undersea bandwidth.” NY Times

April 2019:

A Mysterious Infection, Spanning the Globe in a Climate of Secrecy

The rise of Candida auris embodies a serious and growing public health threat: drug-resistant germs.

The first time doctors encountered C. auris was in the ear of a woman in Japan in 2009 (auris is Latin for ear). It seemed innocuous at the time, a cousin of common, easily treated fungal infections.

The C.D.C. investigators theorized that C. auris started in Asia (India, China and Japan) and spread across the globe.

By Matt Richtel & Andrew Jacobs

“Last May, an elderly man was admitted to the Brooklyn branch of Mount Sinai Hospital for abdominal surgery. A blood test revealed that he was infected with a newly discovered germ as deadly as it was mysterious. Doctors swiftly isolated him in the intensive care unit.

The germ, a fungus called Candida auris, preys on people with weakened immune systems, and it is quietly spreading across the globe. Over the last five years, it has hit a neonatal unit in Venezuela, swept through a hospital in Spain, forced a prestigious British medical center to shut down its intensive care unit, and taken root in India, Pakistan and South Africa.

Recently C. auris reached New York and Illinois, leading the federal Centers for Disease Control and Prevention to add it to a list of germs deemed “urgent threats.”

The man at Mount Sinai died after 90 days in the hospital, but C. auris did not. Tests showed it was everywhere in his room, so invasive that the hospital needed special cleaning equipment and had to rip out some of the ceiling and floor tiles to eradicate it.

“Everything was positive — the walls, the bed, the doors, the curtains, the phones, the sink, the whiteboard, the poles, the pump,” said Dr. Scott Lorin, the hospital’s president. “The mattress, the bed rails, the canister holes, the window shades, the ceiling, everything in the room was positive.”

C. auris is so tenacious, in part, because it is impervious to major antifungal medications, making it a new example of one of the world’s most intractable health threats: the rise of drug-resistant infections.

For decades, public health experts have warned that the overuse of antibiotics was reducing the effectiveness of drugs that have lengthened life spans by curing bacterial infections once commonly fatal. But lately, there has been an explosion of resistant fungi as well, adding a new and frightening dimension to a phenomenon that is undermining a pillar of modern medicine.

“It’s an enormous problem,” said Matthew Fisher, a professor of fungal epidemiology at Imperial College London, who was a co-author of a recent scientific review on the rise of resistant fungi. “We depend on being able to treat those patients with antifungals.”

Simply put, fungi, just like bacteria, are evolving defenses to survive modern medicines.

Yet even as world health leaders have pleaded for more restraint in prescribing antimicrobial drugs to combat bacteria and fungi — convening the United Nations General Assembly in 2016 to manage an emerging crisis — gluttonous overuse of them in hospitals, clinics and farming has continued.

Resistant germs are often called “superbugs,” but this is simplistic because they don’t typically kill everyone. Instead, they are most lethal to people with immature or compromised immune systems, including newborns and the elderly, smokers, diabetics and people with autoimmune disorders who take steroids that suppress the body’s defenses.

Scientists say that unless more effective new medicines are developed and unnecessary use of antimicrobial drugs is sharply curbed, risk will spread to healthier populations. A study funded by the British government projects that if policies are not put in place to slow the rise of drug resistance, 10 million people could die worldwide of all such infections in 2050, eclipsing the eight million expected to die that year from cancer.

In the United States, two million people contract resistant infections annually, and 23,000 die from them, according to the official C.D.C. estimate. That number was based on 2010 figures; more recent estimates from researchers at Washington University School of Medicine put the death toll at 162,000. Worldwide fatalities from resistant infections are estimated at 700,000.

Antibiotics and antifungals are both essential to combat infections in people, but antibiotics are also used widely to prevent disease in farm animals, and antifungals are also applied to prevent agricultural plants from rotting. Some scientists cite evidence that rampant use of fungicides on crops is contributing to the surge in drug-resistant fungi infecting humans.

Yet as the problem grows, it is little understood by the public — in part because the very existence of resistant infections is often cloaked in secrecy.

With bacteria and fungi alike, hospitals and local governments are reluctant to disclose outbreaks for fear of being seen as infection hubs. Even the C.D.C., under its agreement with states, is not allowed to make public the location or name of hospitals involved in outbreaks. State governments have in many cases declined to publicly share information beyond acknowledging that they have had cases.

All the while, the germs are easily spread — carried on hands and equipment inside hospitals; ferried on meat and manure-fertilized vegetables from farms; transported across borders by travelers and on exports and imports; and transferred by patients from nursing home to hospital and back.

C. auris, which infected the man at Mount Sinai, is one of dozens of dangerous bacteria and fungi that have developed resistance.

Other prominent strains of the fungus Candida — one of the most common causes of bloodstream infections in hospitals — have not developed significant resistance to drugs, but more than 90 percent of C. auris infections are resistant to at least one drug, and 30 percent are resistant to two or more drugs, the C.D.C. said.

Dr. Lynn Sosa, Connecticut’s deputy state epidemiologist, said she now saw C. auris as “the top” threat among resistant infections. “It’s pretty much unbeatable and difficult to identify,” she said.

Nearly half of patients who contract C. auris die within 90 days, according to the C.D.C. Yet the world’s experts have not nailed down where it came from in the first place.

“It is a creature from the black lagoon,” said Dr. Tom Chiller, who heads the fungal branch at the C.D.C., which is spearheading a global detective effort to find treatments and stop the spread. “It bubbled up and now it is everywhere.”

‘No need’ to tell the public

In late 2015, Dr. Johanna Rhodes, an infectious disease expert at Imperial College London, got a panicked call from the Royal Brompton Hospital, a British medical center outside London. C. auris had taken root there months earlier, and the hospital couldn’t clear it.

“‘We have no idea where it’s coming from. We’ve never heard of it. It’s just spread like wildfire,’” Dr. Rhodes said she was told. She agreed to help the hospital identify the fungus’s genetic profile and clean it from rooms.

Under her direction, hospital workers used a special device to spray aerosolized hydrogen peroxide around a room used for a patient with C. auris, the theory being that the vapor would scour each nook and cranny. They left the device going for a week. Then they put a “settle plate” in the middle of the room with a gel at the bottom that would serve as a place for any surviving microbes to grow, Dr. Rhodes said.

Only one organism grew back. C. auris.

It was spreading, but word of it was not. The hospital, a specialty lung and heart center that draws wealthy patients from the Middle East and around Europe, alerted the British government and told infected patients, but made no public announcement.

“There was no need to put out a news release during the outbreak,” said Oliver Wilkinson, a spokesman for the hospital.

This hushed panic is playing out in hospitals around the world. Individual institutions and national, state and local governments have been reluctant to publicize outbreaks of resistant infections, arguing there is no point in scaring patients — or prospective ones.

Dr. Silke Schelenz, Royal Brompton’s infectious disease specialist, found the lack of urgency from the government and hospital in the early stages of the outbreak “very, very frustrating.”

“They obviously didn’t want to lose reputation,” Dr. Schelenz said. “It hadn’t impacted our surgical outcomes.”

The role of pesticides?

As the C.D.C. works to limit the spread of drug-resistant C. auris, its investigators have been trying to answer the vexing question: Where in the world did it come from?

The first time doctors encountered C. auris was in the ear of a woman in Japan in 2009 (auris is Latin for ear). It seemed innocuous at the time, a cousin of common, easily treated fungal infections.

Three years later, it appeared in an unusual test result in the lab of Dr. Jacques Meis, a microbiologist in Nijmegen, the Netherlands, who was analyzing a bloodstream infection in 18 patients from four hospitals in India. Soon, new clusters of C. auris seemed to emerge with each passing month in different parts of the world.

The C.D.C. investigators theorized that C. auris started in Asia and spread across the globe. But when the agency compared the entire genome of auris samples from India and Pakistan, Venezuela, South Africa and Japan, it found that its origin was not a single place, and there was not a single auris strain.

The genome sequencing showed that there were four distinctive versions of the fungus, with differences so profound that they suggested that these strains had diverged thousands of years ago and emerged as resistant pathogens from harmless environmental strains in four different places at the same time.

“Somehow, it made a jump almost seemingly simultaneously, and seemed to spread and it is drug resistant, which is really mind-boggling,” said Dr. Snigdha Vallabhaneni, an epidemiologist at the C.D.C.

There are different theories as to what happened with C. auris. Dr. Meis, the Dutch researcher, said he believed that drug-resistant fungi were developing thanks to heavy use of fungicides on crops.

Dr. Meis became intrigued by resistant fungi when he heard about the case of a 63-year-old patient in the Netherlands who died in 2005 from aspergillosis, an infection caused by the fungus Aspergillus that proved resistant to a front-line antifungal treatment called itraconazole. That drug is a virtual copy of the azole pesticides that are used to dust crops the world over and account for more than one third of all fungicide sales.

A 2013 paper in PLOS Pathogens said that it appeared to be no coincidence that drug-resistant Aspergillus was showing up in the environment where the azole fungicides were used. The fungus appeared in 12 percent of Dutch soil samples, for example, but also in “flower beds, compost, leaves, plant seeds, soil samples of tea gardens, paddy fields, hospital surroundings, and aerial samples of hospitals.”

Dr. Meis visited the C.D.C. last summer to share research and theorize that the same thing is happening with C. auris, which is also found in the soil: Azoles have created an environment so hostile that the fungi are evolving, with resistant strains surviving.

This is similar to concerns that resistant bacteria are growing because of excessive use of antibiotics in livestock for health and growth promotion. As with antibiotics in farm animals, azoles are used widely on crops.

“On everything — potatoes, beans, wheat, anything you can think of, tomatoes, onions,” said Dr. Rhodes, the infectious disease specialist who worked on the London outbreak. “We are driving this with the use of antifungicides on crops.”

Dr. Chiller theorizes that C. auris may have benefited from the heavy use of fungicides. His idea is that C. auris actually has existed for thousands of years, hidden in the world’s crevices, a not particularly aggressive bug. But as azoles began destroying more prevalent fungi, an opportunity arose for C. auris to enter the breach: a germ able to readily resist fungicides was now well suited to a world in which less resistant fungi are under attack.

The mystery of C. auris’s emergence remains unsolved, and its origin seems, for the moment, to be less important than stopping its spread.

Resistance and denial

For now, the uncertainty around C. auris has led to a climate of fear, and sometimes denial.

Last spring, Jasmine Cutler, 29, went to visit her 72-year-old father at a hospital in New York City, where he had been admitted because of complications from a surgery the previous month.

When she arrived at his room, she discovered that he had been sitting for at least an hour in a recliner, in his own feces, because no one had come when he had called for help to use the bathroom. Ms. Cutler said it became clear to her that the staff was afraid to touch him because a test had shown that he was carrying C. auris.

“I saw doctors and nurses looking in the window of his room,” she said. “My father’s not a guinea pig. You’re not going to treat him like a freak at a show.”

He was eventually discharged and told he no longer carried the fungus. But he declined to be named, saying he feared being associated with the frightening infection.”

NY Times

Could an eye doctor diagnose Alzheimer’s before you have symptoms?

“A quick eye exam might one day allow eye doctors to check up on both your eyeglasses prescription and your brain health.

A study of more than 200 people at the Duke Eye Center published March 11 in the journal Ophthalmology Retina suggests the loss of blood vessels in the retina could signal Alzheimer’s disease.

In people with healthy brains, microscopic blood vessels form a dense web at the back of the eye inside the retina, as seen in 133 participants in a control group.

In the eyes of 39 people with Alzheimer’s disease, that web was less dense and even sparse in places. The differences in density were statistically significant after researchers controlled for factors including age, sex, and level of education, said Duke ophthalmologist and retinal surgeon Sharon Fekrat, M.D., the study’s senior author.

“We’re measuring blood vessels that can’t be seen during a regular eye exam, and we’re doing that with relatively new noninvasive technology that takes high-resolution images of very small blood vessels within the retina in just a few minutes,” she said. “It’s possible that these changes in blood vessel density in the retina could mirror what’s going on in the tiny blood vessels in the brain, perhaps before we are able to detect any changes in cognition.”

The study found differences in the retinas of those with Alzheimer’s disease when compared to healthy people and to those with mild cognitive impairment, often a precursor to Alzheimer’s disease.

With nearly 6 million Americans living with Alzheimer’s disease and no viable treatments or noninvasive tools for early diagnosis, its burden on families and the economy is heavy. Scientists at Duke Eye Center and beyond have studied other changes in the retina that could signal trouble upstream in the brain, such as thinning of some of the retinal nerve layers.

“We know that there are changes that occur in the brain in the small blood vessels in people with Alzheimer’s disease, and because the retina is an extension of the brain, we wanted to investigate whether these changes could be detected in the retina using a new technology that is less invasive and easy to obtain,” said Dilraj S. Grewal, M.D., a Duke ophthalmologist and retinal surgeon and a lead author on the study.

The Duke study used a noninvasive technology called optical coherence tomography angiography (OCTA). OCTA machines use light waves that reveal blood flow in every layer of the retina.

An OCTA scan could even reveal changes in tiny capillaries — most less than half the width of a human hair — before blood vessel changes show up on a brain scan such as an MRI or cerebral angiogram, which highlight only larger blood vessels. Such techniques to study the brain are invasive and costly.” Neuroscience Journal
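
Vessel density of the kind the Duke team measured reduces, at its simplest, to the fraction of a scanned region that shows detectable blood flow. Purely as an illustration (this is not the study’s actual pipeline; the function name, the pre-binarized array and the toy values are all assumptions), here is a minimal Python sketch:

```python
import numpy as np

def vessel_density(octa_slab: np.ndarray) -> float:
    """Fraction of pixels flagged as containing blood flow.

    `octa_slab` is a hypothetical 2-D binary array: 1 marks a pixel where
    the OCTA scan detected moving blood, 0 marks static tissue. Real OCTA
    software segments vessels far more carefully than this.
    """
    return float(octa_slab.mean())

# Toy example: a sparse 4x4 "retina" with 3 vessel pixels out of 16.
scan = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0]])
print(f"vessel density: {vessel_density(scan):.2f}")  # 0.19
```

A lower score on a density measure of this general kind, computed over the retinal microvasculature, is the sort of difference the study reports between the Alzheimer’s and control groups.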

March 2019:

Unnecessarily Prescribed Antibiotics

The drugs are not just overprescribed. They often pose special risks to older patients, including tendon problems, nerve damage and mental health issues.

“Last month, Caryn Isaacs went to see her primary care doctor for her annual Medicare wellness visit. A patient advocate who lives in Manhattan, Ms. Isaacs, 68, felt perfectly fine and expected a clean bill of health.

But her doctor, who’d ordered a variety of blood and urine tests, said she had a urinary tract infection and prescribed an antibiotic.

“The nurse said, ‘Can you take Cipro?’” Ms. Isaacs recalled. “I didn’t have any reason not to, so I said yes.”

There are actually plenty of reasons for older people to avoid Cipro and other antibiotics known as fluoroquinolones, which have prompted warnings from the Food and Drug Administration about their risks of serious side effects.

And there are good reasons to avoid any antibiotic when bacteria are detected in a urine culture in a patient who has no other signs of infection. So-called asymptomatic bacteriuria increases with age, but such patients are not sick and don’t need drugs, so medical guidelines recommend against routine screening or treatment.

Yet Ms. Isaacs’s prescription was hardly unusual. Despite ongoing campaigns by the Centers for Disease Control and Prevention and other public health groups, older Americans still take too many antibiotics.

Patients over age 65 have the highest rate of outpatient prescribing of any age group. A new C.D.C. study, published in the Journal of the American Geriatrics Society, points out that doctors write enough antibiotic prescriptions annually — nearly 52 million in 2014 — for every older person to get at least one.
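
That “at least one each” framing is simple division. A back-of-envelope check in Python, assuming roughly 46 million Americans aged 65 and over in 2014 (the population figure is an assumption drawn from Census estimates, not from the article):

```python
# Back-of-envelope check of the "one prescription per older adult" claim.
prescriptions_2014 = 52_000_000  # "nearly 52 million" per the C.D.C. study
older_adults_2014 = 46_000_000   # assumed: ~46M Americans 65+ in 2014

per_person = prescriptions_2014 / older_adults_2014
print(f"{per_person:.2f} prescriptions per person 65 and over")  # ~1.13
```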

Because the researchers used a national pharmacy database that tracked only outpatients, the study likely underestimates the problem. “The volume would be higher if you included hospitals and nursing homes and other long-term care settings,” said Katherine Fleming-Dutra, deputy director of the C.D.C.’s Office of Antibiotic Stewardship.

Glass-half-full types might be pleased to see that after climbing 30 percent from 2000 to 2010, antibiotic prescriptions for older adults leveled off between 2011 and 2014. “That’s potentially good news,” said Dr. Sarah Kabbani, an infectious disease specialist at the C.D.C. and lead author of the study.

But what public health advocates want to see is a decline, as has happened with young children, once the group most likely to use antibiotics.

“It’s hard to feel heartened about a plateau when overuse remains so prevalent,” said Dr. Caleb Alexander, co-director of the Johns Hopkins Center for Drug Safety and Effectiveness. “It’s as perennial as the grass.”

Antibiotic overuse contributes to a serious public health threat by creating drug resistance, as infectious bacteria adapt to the medications. Drugs then lose their effectiveness, forcing doctors to resort to more toxic, less potent, often costlier options. Two million Americans get antibiotic-resistant infections annually, the C.D.C. has reported, and 23,000 die from them.

Moreover, antibiotics interact badly with many of the other drugs older adults take, including such widely used medications as statins, blood thinners, and kidney and heart medications. “The number of potential drug-drug interactions with antibiotics is vast,” Dr. Alexander cautioned.

Some antibiotics also have dismaying, even alarming, side effects in themselves. In 2013, the F.D.A. issued a warning about azithromycin, which in rare cases leads to dangerous heart arrhythmias. 

But for more than a decade, the agency’s most frequent target has been the fluoroquinolones.

It has warned that this class of antibiotics (including Cipro and Levaquin) increases the risk of tendinitis and tendon rupture, particularly in older adults; that it can cause the nerve damage called peripheral neuropathy; and that it can lead to hypoglycemia (low blood sugar).

“One of the most common problems for older adults are changes in mental status — getting anxious, getting loopy,” said Dr. Sara Cosgrove, medical director of the Johns Hopkins Hospital’s Adult Antimicrobial Stewardship Program. “These drugs get into the brain.” The F.D.A. also warned of the problem in July.

In fact, the agency advised in 2016 that fluoroquinolones’ potential side effects outweighed their benefits for several common infections. Last year, it added still another warning about ruptures or tears in the aorta, a rare but serious condition for which older people are at greater risk.

Fluoroquinolones are also most implicated in the rampant, difficult-to-cure infection called C. difficile, along with an earlier antibiotic, clindamycin. C. difficile, too, occurs more frequently in older people. 

Yet what class of antibiotics did the C.D.C. team determine was most commonly prescribed for older adults? Fluoroquinolones. (The most used single drug was azithromycin, marketed as Zithromax, which isn’t a quinolone.)

More troublingly, doctors often prescribe these medications unnecessarily, studies repeatedly show. Upper respiratory infections — colds, sinus infections, bronchitis — trigger most prescriptions, but those infections are typically viral, not bacterial, and thus impervious to antibiotics.” NY Times

Forgetting uses more brain power than remembering

“Choosing to forget something might take more mental effort than trying to remember it, researchers at The University of Texas at Austin discovered through neuroimaging.

These findings, published in the Journal of Neuroscience, suggest that in order to forget an unwanted experience, more attention should be focused on it. This surprising result extends prior research on intentional forgetting, which focused on reducing attention to the unwanted information through redirecting attention away from unwanted experiences or suppressing the memory’s retrieval.

“We may want to discard memories that trigger maladaptive responses, such as traumatic memories, so that we can respond to new experiences in more adaptive ways,” said Jarrod Lewis-Peacock, the study’s senior author and an assistant professor of psychology at UT Austin. “Decades of research has shown that we have the ability to voluntarily forget something, but how our brains do that is still being questioned. Once we can figure out how memories are weakened and devise ways to control this, we can design treatment to help people rid themselves of unwanted memories.”

Memories are not static. They are dynamic constructions of the brain that regularly get updated, modified and reorganized through experience. The brain is constantly remembering and forgetting information — and much of this happens automatically during sleep.

When it comes to intentional forgetting, prior studies focused on locating “hotspots” of activity in the brain’s control structures, such as the prefrontal cortex, and long-term memory structures, such as the hippocampus. The latest study focuses, instead, on the sensory and perceptual areas of the brain, specifically the ventral temporal cortex, and the patterns of activity there that correspond to memory representations of complex visual stimuli.” Neuroscience Journal
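
The “patterns of activity” researchers read out of ventral temporal cortex are typically decoded with a pattern classifier. The sketch below is only a generic illustration of that approach, using entirely synthetic data and an assumed two-category design; it is not the study’s actual analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: 100 trials x 50 voxels, labeled 0 (scene) or 1 (face).
labels = rng.integers(0, 2, size=100)
voxels = rng.normal(size=(100, 50)) + labels[:, None] * 0.8  # add category signal

clf = LogisticRegression(max_iter=1000).fit(voxels, labels)

# "Classifier evidence" for a new trial: how face-like is this activity pattern?
new_trial = rng.normal(size=(1, 50)) + 0.8
print(f"face evidence: {clf.predict_proba(new_trial)[0, 1]:.2f}")
```

In studies of this kind, rising or falling classifier evidence serves as a moment-to-moment index of how strongly a memory representation is active.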

Heart Healthy Diets in Early Adulthood Linked to Better Brain Function in Middle Age

“Eating a diet rich in fruits and vegetables, moderate in nuts, fish and alcohol and low in meat and full-fat dairy is associated with better cognitive performance in middle age, according to a study published in the March 6, 2019, online issue of Neurology. Cognitive abilities include thinking and memory skills.

Source: American Academy of Neurology.

“Our findings indicate that maintaining good dietary practices throughout adulthood can help to preserve brain health at midlife,” said study author Claire T. McEvoy, PhD.

The study involved 2,621 people who were an average age of 25 at the start and were then followed for 30 years. They were asked about their diet at the beginning of the study and again seven and 20 years later. The participants’ cognitive function was tested twice, when they were about 50 and 55 years old.

The participants’ dietary patterns were evaluated to see how closely they adhered to three heart-healthy diets: the Mediterranean diet, the Dietary Approaches to Stop Hypertension (DASH) diet and a diet quality score designed as part of the study, called the CARDIA a priori Diet Quality Score (APDQS).

The Mediterranean diet emphasizes whole grains, fruits, vegetables, healthy unsaturated fats, nuts, legumes and fish and limits red meat, poultry and full-fat dairy.

The DASH diet emphasizes grains, vegetables, fruits, low-fat dairy, legumes and nuts and limits meat, fish, poultry, total fat, saturated fat, sweets and sodium.

The APDQS diet emphasizes fruits, vegetables, legumes, low-fat dairy, fish, and moderate alcohol, and limits fried foods, salty snacks, sweets, high-fat dairy and sugar-sweetened soft drinks.

For each diet, study participants were divided into one of three groups – low, medium or high adherence score – based on how closely they followed the diet.

The researchers found that people who followed the Mediterranean diet and the APDQS diet, but not the DASH diet, showed less five-year decline in their cognitive function at middle age.

People with high adherence to the Mediterranean diet were 46 percent less likely to have poor thinking skills than people with low adherence to the diet. Of the 868 people in the high group, 9 percent had poor thinking skills, compared to 29 percent of the 798 people in the low group.

People with high adherence to the APDQS diet were 52 percent less likely to have poor thinking skills than people with low adherence to the diet. Of the 938 people in the high group, 6 percent had poor thinking skills, compared to 32 percent of the 805 people in the low group. The results were adjusted for other factors that could affect cognitive function, such as the level of education, smoking, diabetes and physical activity.
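
Those group comparisons can be recomputed directly from the counts reported. A quick Python check of the unadjusted proportions (note that these raw ratios come out larger than the article’s 46 and 52 percent figures, which were adjusted for education, smoking, diabetes and physical activity):

```python
# Unadjusted "poor thinking skills" rates from the reported group sizes.
groups = {
    "Mediterranean": {"high": (0.09, 868), "low": (0.29, 798)},
    "APDQS":         {"high": (0.06, 938), "low": (0.32, 805)},
}

for diet, g in groups.items():
    high_rate, low_rate = g["high"][0], g["low"][0]
    reduction = 1 - high_rate / low_rate
    print(f"{diet}: unadjusted risk reduction {reduction:.0%}")
# Mediterranean: 69%, APDQS: 81% -- both larger than the adjusted estimates.
```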

McEvoy noted large differences in fruit and vegetable intake between the low and high groups for the diets. For the Mediterranean diet, the low group had an average of 2.3 servings of fruit per day and 2.8 of vegetables, compared to 4.2 servings of fruit and 4.4 of vegetables for the high group. For the APDQS diet, the low group ate 2.7 servings of fruit and 4.3 of vegetables, compared to 3.7 and 4.4 for the high group.” Neuroscience Journal website 

Amazon’s Hard Bargain Extends Far Beyond New York

“When Texas officials pushed Amazon to pay nearly $270 million in back sales taxes in 2010, Amazon responded by closing its only warehouse in the state and scrapping expansion plans there. Two years later, the officials agreed to waive the past taxes in exchange for Amazon opening new warehouses.

A similar scene played out in South Carolina, where officials decided in 2011 to deny Amazon a sales tax break. After threatening to stop hiring in the state, the company got the tax exemption by promising to hire more people.

And last year in Seattle, the company’s hometown, Amazon halted plans to build one tower and threatened to lease out one under construction when local officials pushed a tax on large employers. The City Council passed a smaller version of the tax, but the company helped finance a successful opposition to repeal it. Now, Amazon plans to lease out its space in the tower under construction anyway.

In New York, Mayor Bill de Blasio called it a “shock to the system” when Amazon, facing criticism for the deal it reached to build a headquarters in the city, abruptly dropped the plans. Gov. Andrew M. Cuomo is still trying to woo them back. But the reversal mirrored the company’s interactions with officials in other states.

Virtually all of America’s largest businesses drive a hard bargain with governments, angling for benefits and financial incentives. Amazon, though, often plays politics with a distinctive message: Give us what we want, or we’ll leave and take our jobs elsewhere.

The tactics help Amazon squeeze as much as possible out of politicians.

“They are just as cutthroat as can be,” said Alex Pearlstein, vice president at Market Street Services, which helps cities, including those with Amazon warehouses, attract employers.

New York’s experience with Amazon also exposed the company’s limited experience with building community relationships. The company did not hire any local employees or lobbyists to connect with New York residents in advance of announcing the deal. Until recent years, almost no one at the company worked full-time in community or government relations, though it now has more than 100 lobbyists registered in statehouses to push its priorities.

That lack of a significant on-the-ground strategy helped doom the deal in New York, and it is causing headaches elsewhere.

Amazon’s promise to deliver practically any item within two days means that it needs warehouses near major population centers, not just where it gets the best deal. In Edison, N.J., noise complaints pressured the company to spend $3 million to build a high wall around a warehouse. Outside of Chicago in Joliet, Ill., Amazon pays for an extra police officer to help manage traffic — and lawmakers want the company to do more.

In 2010, Texas’s top finance official said Amazon owed $269 million because it had failed to pay sales taxes from 2005 to 2009. Amazon said it did not need to collect the tax because it lacked brick-and-mortar stores in the state. It then shuttered its warehouse at the Dallas-Fort Worth International Airport that employed about 120 people and dropped plans to build more outposts in Texas.

Under a settlement in 2012, the state gave up on the tax charges in exchange for Amazon’s promise to create 2,500 jobs and spend at least $200 million on facilities. Amazon also agreed to begin collecting sales tax and pay the state.” NY Times

What is a good job?

“THERE IS A raging debate — on newspaper pages, inside Silicon Valley — as to what constitutes a “good job.” I’m an investigative business reporter, and so I have a strange perspective on this question. When I speak to employees at a company, it’s usually because something has gone wrong. My stock-in-trade are sources who feel their employers are acting unethically or ignoring sound advice. The workers who speak to me are willing to describe both the good and the bad in the places where they work, in the hope that we will all benefit from their insights.

What’s interesting to me, though, is that these workers usually don’t come across as unhappy. When they agree to talk to a journalist — to share confidential documents or help readers understand how things went awry — it’s not because they hate their employers or are overwhelmingly disgruntled. They often seem to love their jobs and admire the companies they work for. They admire them enough, in fact, to want to help them improve. They are engaged and content. They believe in what they are doing.

Do these people have “good jobs”? Are they luckier or less fortunate than my $1.2 million friend, who couldn’t care less about his firm? Are Google employees who work 60 hours a week but who can eat many of their meals (or freeze their eggs) on the company’s dime more satisfied than a start-up founder in Des Moines who cleans the office herself but sees her dream become reality?

As the airwaves heat up in anticipation of the 2020 election, Americans are likely to hear a lot of competing views about what a “good job” entails. Some will celebrate billionaires as examples of this nation’s greatness, while others will pillory them as evidence of an economy gone astray. Through all of that, it’s worth keeping in mind that the concept of a “good job” is inherently complicated, because ultimately it’s a conversation about what we value, whether individually or collectively. Even for Americans who live frighteningly close to the bone, like the janitors studied by Wrzesniewski and Dutton, a job is usually more than just a means to a paycheck. It’s a source of purpose and meaning, a place in the world.

There’s a possibility, when it comes to understanding good jobs, that we have it all wrong. When I was speaking to my H.B.S. classmates, one of them reminded me about some people at our reunion who seemed wholly unmiserable — who seemed, somewhat to their own surprise, to have wound up with jobs that were both financially and emotionally rewarding. I knew of one person who had become a prominent venture capitalist; another friend had started a retail empire that expanded to five states; yet another was selling goods all over the world. There were some who had become investors running their own funds.

And many of them had something in common: They tended to be the also-rans of the class, the ones who failed to get the jobs they wanted when they graduated. They had been passed over by McKinsey & Company and Google, Goldman Sachs and Apple, the big venture-capital firms and prestigious investment houses. Instead, they were forced to scramble for work — and thus to grapple, earlier in their careers, with the trade-offs that life inevitably demands. These late bloomers seemed to have learned the lessons about workplace meaning preached by people like Barry Schwartz. It wasn’t that their workplaces were enlightened or (as far as I could tell) that H.B.S. had taught them anything special. Rather, they had learned from their own setbacks. And often they wound up richer, more powerful and more content than everyone else.” NY Times

February 2019:

The Lab Discovering DNA in Old Books

Artifacts have genetic material hidden inside, which can help scientists understand the past.

“In recent years, archaeologists and historians have awakened to the potential of ancient DNA extracted from human bones and teeth. DNA evidence has enriched—and complicated—stories of prehistoric human migrations. It has provided tantalizing clues to epidemics such as the Black Death. It has identified the remains of King Richard III, found under a parking lot. But Matthew Collins, who leads the lab, isn’t just interested in human remains. He’s interested in the things these humans made; the animals they bred, slaughtered, and ate; and the economies they created.

That’s why he was studying DNA from the bones of livestock—and why his lab is now at the forefront of studying DNA from objects such as parchment, birch-bark tar, and beeswax. These objects can fill in gaps in the written record, revealing new aspects of historical production and trade. How much beeswax came from North Africa, for example? Or how did cattle plague make its way through Europe? With ample genetic data, you might reconstruct a more complete picture of life hundreds of years in the past.

Studying the DNA in artifacts is still a relatively new field, with many prospects that remain unexplored. But in our own modern world, we’ve already started to change the biological record, and future archaeologists will not find the same trove of hidden information in our petroleum-laden material culture. Collins pointed out that we no longer rely as much on natural materials to create the objects we need. What might have once been leather or wood or wool is now all plastic.” The Atlantic

THE AI TEXT GENERATOR THAT’S TOO DANGEROUS TO MAKE PUBLIC

“IN 2015, CAR-AND-ROCKET man Elon Musk joined with influential startup backer Sam Altman to put artificial intelligence on a new, more open course. They cofounded a research institute called OpenAI to make new AI discoveries and give them away for the common good. Now, the institute’s researchers are sufficiently worried by something they built that they won’t release it to the public.

The AI system that gave its creators pause was designed to learn the patterns of language. It does that very well—scoring better on some reading-comprehension tests than any other automated system. But when OpenAI’s researchers configured the system to generate text, they began to think about their achievement differently.

“It looks pretty darn real,” says David Luan, vice president of engineering at OpenAI, of the text the system generates. He and his fellow researchers began to imagine how it might be used for unfriendly purposes. “It could be that someone who has malicious intent would be able to generate high-quality fake news,” Luan says.

That concern prompted OpenAI to publish a research paper on its results, but not release the full model or the 8 million web pages it used to train the system. Previously, the institute has often disseminated full code with its publications, including an earlier version of the language project from last summer.

OpenAI’s hesitation comes amid growing concern about the ethical implications of progress in AI, including from tech companies and lawmakers.

Google, too, has decided that it’s no longer appropriate to innocently publish new AI research findings and code. Last month, the search company disclosed in a policy paper on AI that it has put constraints on research software it has shared because of fears of misuse. The company recently joined Microsoft in adding language to its financial filings warning investors that its AI software could raise ethical concerns and harm the business.” Source: Wired
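
OpenAI’s system is a very large neural network, but the task it was trained on, predicting a plausible next word and then sampling from that prediction to produce text, can be illustrated with a toy bigram model. A minimal sketch (the miniature corpus and every identifier here are invented for illustration):

```python
import random
from collections import Counter, defaultdict

corpus = "the model reads text . the model writes text . the text looks real .".split()

# Count which word follows which: a bigram table, a toy stand-in for the
# giant neural language model described in the article.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation, one word at a time, from the bigram counts."""
    words = [start]
    for _ in range(length):
        options = following[words[-1]]
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

random.seed(1)
print(generate("the"))
```

Scaled up by many orders of magnitude, and with the word counts replaced by learned network weights, this same next-token sampling loop is what makes generated text “look pretty darn real.”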

Microsoft Aims to Connect Patient Health Records in the Cloud

Health-care push also includes new uses for chatbots linked to research trials and drug therapies, and team chat to better coordinate patient care.

“Microsoft Corp. is releasing a service to help health-care companies move vast amounts of patient data to its cloud and connect with other related systems in a bid to offer clinicians, individuals and researchers a more comprehensive view of patient health.    

The tool, based on Microsoft’s Azure cloud platform and a national standard for exchanging health records, will let disparate health systems talk to each other, for example hooking up patient records with pharmacy systems, fitness devices and others more seamlessly.

Health care lags behind some other industries in moving data to internet-based storage, and while health records have mostly gone digital, they are often stored in different databases that can’t share information easily. That makes it hard to create systems that use new artificial intelligence and data analysis techniques to track patient well-being and find new targeted therapies.

A better-connected health-care system would provide clinicians with more complete profiles of their patients, researchers with more data to study and individuals with more information to take control of their health, according to Microsoft. It’s also an attempt to help Microsoft attract companies to Azure over market leader Amazon Web Services.

Microsoft will also continue to add new health-care tools to Azure, a vice president of Microsoft Healthcare said in an interview. “It’s hard to think of data standards for interoperability as a sexy topic,” he said, but it’s critical to a host of new health-care applications.

The software giant has been pushing into health care in fits and starts over the past several years. Recently it has been working on cloud and artificial-intelligence products to help reduce data-entry tasks for doctors, triage patients and provide more-targeted cancer care. Last month, Microsoft announced an Azure deal with Walgreens Boots Alliance Inc. The drugstore company said it will use Azure for services that connect patients’ health-care data with clinicians and pharmacists, among other things. 

To be successful in health care, Microsoft must train its software and artificial intelligence tools to be familiar with medical needs and terminology and must comply with a complex set of privacy requirements around healthcare data.  Microsoft will announce the new Azure service next week at the HIMSS healthcare conference in Orlando, Florida.  One example the company will show is using the service to create an app for scheduling hospital nurses. Microsoft also plans to announce about three dozen organizations that are already trying the new tool.” Bloomberg 
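
The exchange standard at the center of this effort is HL7’s FHIR (Fast Healthcare Interoperability Resources), which exposes records as JSON resources over REST. As a rough sketch of what reading a record from any FHIR endpoint looks like (the base URL, token and patient ID below are placeholders, and this illustrates the open standard generally, not Microsoft’s specific product):

```python
import requests

# Hypothetical FHIR server and credentials -- placeholders only.
FHIR_BASE = "https://example-fhir-server.example.com"
TOKEN = "<access-token>"

# FHIR models records as typed REST resources: GET {base}/Patient/{id}
resp = requests.get(
    f"{FHIR_BASE}/Patient/12345",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/fhir+json"},
)
resp.raise_for_status()

patient = resp.json()  # a JSON "Patient" resource
print(patient.get("name"), patient.get("birthDate"))
```

Because every conforming system speaks this same resource format, a pharmacy system, a fitness device and a hospital database can in principle read one another’s records, which is the interoperability the article describes.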

Why Girls Beat Boys at School and Lose to Them at the Office

“From elementary school through college, girls are more disciplined about their schoolwork than boys; they study harder and get better grades. Girls consistently outperform boys academically. And yet, men hold a staggering 95 percent of the top positions in the largest public companies.

What if those same habits that propel girls to the top of their class — their hyper-conscientiousness about schoolwork — also hold them back in the work force?

When investigating what deters professional advancement for women, the journalists Katty Kay and Claire Shipman found that a shortage of competence is less likely to be an obstacle than a shortage of confidence. When it comes to work-related confidence, they found men are far ahead. “Underqualified and underprepared men don’t think twice about leaning in,” they wrote. “Overqualified and overprepared, too many women still hold back. Women feel confident only when they are perfect.”

As a psychologist who works with teenagers, I hear this concern often from the parents of many of my patients. They routinely remark that their sons do just enough to keep the adults off their backs, while their daughters relentlessly grind, determined to leave no room for error. The girls don’t stop until they’ve polished each assignment to a high shine and rewritten their notes with color-coded precision.

We need to ask: 

What if school is a confidence factory for our sons, but only a competence factory for our daughters?

This possibility hit me when I was caring for an eighth grader in my practice. She got terrific grades but was feeling overwhelmed by school. Her brother, a ninth grader, had similarly excellent grades, but when I asked if he worked as hard as she did, she scoffed. If she worked on an assignment for an hour and got an A, she felt “safe” only if she spent a full hour on other assignments like it. Her brother, in contrast, flew through his work. When he brought home an A, she said, he felt “like a stud.” If his grades slipped a bit, he would take his effort up just a notch. But she never felt “safe” enough to ever put in less than maximum effort.

That experience — of succeeding in school while exerting minimal or moderate effort — is a potentially crucial one. It may help our sons develop confidence, as they see how much they can accomplish simply by counting on their wits. For them, school serves as a test track, where they build their belief in their abilities and grow increasingly at ease relying on them. Our daughters, on the other hand, may miss the chance to gain confidence in their abilities if they always count on intellectual elbow grease alone.

First, parents and teachers can stop praising inefficient overwork, even if it results in good grades. Gendered approaches to learning set in early, so it’s never too soon to start working against them.

We can also encourage girls toward a different approach to school — one that’s more focused on economy of effort, rather than how many hours they put in. Whenever one of the academically impressive and persistently anxious girls in my practice tells me about staying up until 2 in the morning studying, I see an opening. That’s the moment to push them to become tactical, to figure out how to continue learning and getting the same grades while doing a little bit less. I urge my patients — and my own teenage daughter — to begin study sessions by taking sample tests, to see how much they know before figuring out how much more they need to do to attain mastery over a concept or task. Many girls build up an incredible capacity for work, but they need these moments to discover and take pride in how much they already understand.

Teachers, too, can challenge girls’ over-the-top tendencies. When a girl with a high-A average turns in extra credit work, her instructor might ask if she is truly taken with the subject or if she is looking to store up “insurance points,” as some girls call them. If it’s the former, more power to her. If it’s the latter, the teacher might encourage the student to trust that what she knows and the work she is already doing will almost certainly deliver the grade she wants. Educators can also point out to this student that she may not need insurance; she probably has a much better grasp of the material than she gives herself credit for.

Finally, we can affirm for girls that it is normal and healthy to feel some anxiety about school. Too often, girls are anxious even about being anxious, so they turn to excessive studying for comfort. We can remind them that being a little bit nervous about schoolwork just means that they care about it, which of course they should.

Even if neither you nor your daughter cares about becoming a chief executive, you may worry that she will eventually be crushed by the weight of her own academic habits. While a degree of stress can promote growth, working at top speed in every class at all times is unhealthy and unsustainable for even the most dedicated high school students. A colleague of mine likes to remind teenagers that in classes where any score above 90 counts as an A, the difference between a 91 and a 99 is a life.”

NY Times

Germs in Your Gut Are Talking to Your Brain. Scientists Want to Know What They’re Saying.

The body’s microbial community may influence the brain and behavior, perhaps even playing a role in dementia and other disorders.

“Dr. Cryan and other scientists were beginning to find hints that these microbes could influence the brain and behavior. Perhaps, he told the scientific gathering, the microbiome has a role in the development of Alzheimer’s disease.

The idea was not well received. “I’ve never given a talk to so many people who didn’t believe what I was saying,” Dr. Cryan recalled.

A lot has changed since then: Research continues to turn up remarkable links between the microbiome and the brain. Scientists are finding evidence that the microbiome may play a role not just in Alzheimer’s disease, but in Parkinson’s disease, depression, schizophrenia, autism and other conditions.

For some neuroscientists, new studies have changed the way they think about the brain.

One of the skeptics at that Alzheimer’s meeting was Sangram Sisodia, a neurobiologist at the University of Chicago. He wasn’t swayed by Dr. Cryan’s talk, but later he decided to put the idea to a simple test.

“It was just on a lark,” said Dr. Sisodia. “We had no idea how it would turn out.”

He and his colleagues gave antibiotics to mice prone to develop a version of Alzheimer’s disease, in order to kill off much of the gut bacteria in the mice. Later, when the scientists inspected the animals’ brains, they found far fewer of the protein clumps linked to dementia.

Just a little disruption of the microbiome was enough to produce this effect. Young mice given antibiotics for a week had fewer clumps in their brains when they grew older, too. 

“I never imagined it would be such a striking result,” Dr. Sisodia said. “For someone with a background in molecular biology and neuroscience, this is like going into outer space.”

Following a string of similar experiments, he now suspects that just a few species in the gut — perhaps even one — influence the course of Alzheimer’s disease, perhaps by releasing chemicals that alter how immune cells work in the brain.

It’s likely that this influence begins before birth, as a pregnant mother’s microbiome releases molecules that make their way into the fetal brain. 

Mothers seed their babies with microbes during childbirth and breast feeding. During the first few years of life, both the brain and the microbiome rapidly mature.

To understand the microbiome’s influence on the developing brain, Rebecca Knickmeyer, a neuroscientist at Michigan State University, is studying fMRI scans of infants.

In her first study, published in January, she focused on the amygdala, the emotion-processing region of the brain that Dr. Cryan and others have found to be altered in germ-free mice. 

Dr. Knickmeyer and her colleagues measured the strength of the connections between the amygdala and other regions of the brain. Babies with a lower diversity of species in their guts have stronger connections, the researchers found.
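
“Diversity of species” in microbiome work is usually summarized with an index rather than a raw species count. The article does not say which metric this study used; as one common example, the Shannon index weighs both how many species are present and how evenly they are distributed:

```python
import math

def shannon_diversity(abundances: list[float]) -> float:
    """Shannon index H = -sum(p_i * ln p_i) over relative abundances."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in props)

# Toy gut samples: evenly mixed vs. dominated by a single species.
print(round(shannon_diversity([25, 25, 25, 25]), 2))  # 1.39 (higher diversity)
print(round(shannon_diversity([97, 1, 1, 1]), 2))     # 0.17 (lower diversity)
```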

Does that mean a low-diversity microbiome makes babies more fearful of others? It’s not possible to say yet — but Dr. Knickmeyer hopes to find out by running more studies on babies.”

NY Times

Train the Brain to Form Good Habits Through Repetition

“Researchers at Princeton and Brown Universities have created a model which shows that forming good (and bad) habits depends more on how often you perform an action than on how much satisfaction you get from it. The new study is published in Psychological Review.

The researchers developed a computer simulation, in which digital rodents were given a choice of two levers, one of which was associated with the chance of getting a reward. The lever with the reward was the ‘correct’ one, and the lever without was the ‘wrong’ one.

The chance of getting a reward was swapped between the two levers, and the simulated rodents were trained to choose the ‘correct’ one.

When the digital rodents were trained for a short time, they managed to choose the new, ‘correct’ lever when the chance of reward was swapped. However, when they were trained extensively on one lever, the digital rats stuck to the ‘wrong’ lever stubbornly, even when it no longer had the chance for reward.

The rodents preferred to stick to the repeated action that they were used to, rather than have the chance for a reward.

Dr. Elliot Ludvig, an associate professor in the Department of Psychology and one of the paper’s authors, commented:

“Much of what we do is driven by habits, yet how habits are learned and formed is still somewhat mysterious. Our work sheds new light on this question by building a mathematical model of how simple repetition can lead to the types of habits we see in people and other creatures.” Neuroscience Journal
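
The repetition-versus-reward result can be sketched as a two-lever simulation. What follows is a simplified illustration of the idea, not the authors’ published model: each press updates a reward estimate and also strengthens a value-free “habit” weight, so a lever pressed often enough keeps winning even after the reward moves.

```python
ALPHA, H_RATE = 0.1, 0.005  # learning rates (illustrative values)

def run(training_trials: int) -> int:
    q = [0.0, 0.0]      # reward estimates for levers 0 and 1
    habit = [0.0, 0.0]  # value-free habit strength, grown by repetition alone

    def press(lever: int, reward: float) -> None:
        q[lever] += ALPHA * (reward - q[lever])      # reward learning
        habit[lever] += H_RATE * (1 - habit[lever])  # repetition learning

    for _ in range(training_trials):  # training: lever 0 pays off
        press(0, reward=1.0)
    for _ in range(10):               # reversal: both levers sampled,
        press(0, reward=0.0)          # lever 0 no longer rewarded,
        press(1, reward=1.0)          # lever 1 now rewarded
    # Final preference: blend of reward estimate and habit strength.
    return max((0, 1), key=lambda a: q[a] + habit[a])

print(run(30))   # 1: the briefly trained agent switches to the newly correct lever
print(run(500))  # 0: the over-trained agent sticks with the old, 'wrong' lever
```

Because the habit term grows with every repetition regardless of payoff, extensive training lets it outweigh the updated reward estimates, mirroring the digital rodents that stubbornly kept pressing the unrewarded lever.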

January 2019:

How to Declutter and Organize Your Personal Tech in a Few Simple Steps

Accessories and data may not take up much physical space, but they contribute to frustration and anxiety.

“Think about the digital junk we hoard, like the tens of thousands of photos bloating our smartphones or the backlog of files cluttering our computer drives, such as old work presentations, expense receipts and screenshots we have not opened in years.

In addition to the digital mess, tech hardware adds to the pile of junk that sparks no joy in our lives. Everyone has a drawer full of ancient cellphones, tangled-up wires and earphones that are never touched. And the things we do use every day, like charging cables strewn around the house, are an eyesore.

Why are people so terrible about tech hoarding? Fortin, a professional organizer, summed it up: “We don’t really think about the cost of holding on to things, but we think about the cost of needing it one day and not having it.”

Tidying up your digital media may not feel worthwhile because your files are not visible in the real world. Yet holding on to all the data takes up valuable space on devices while also making important files more difficult to find. The professionals recommended a process of purging and labeling what’s left. 

To streamline this process on a computer, open a folder and sort the files by when they were last opened. From there, you can immediately eliminate the files you have not opened in years.
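
The same “sort by when last opened” triage can be scripted. A small sketch that lists a folder’s files by last-access time, oldest first, so the stalest deletion candidates surface (the folder path is a placeholder, and access times are not reliably updated on every system):

```python
import time
from pathlib import Path

folder = Path.home() / "Documents"  # placeholder: any folder to triage

files = sorted((p for p in folder.glob("*") if p.is_file()),
               key=lambda p: p.stat().st_atime)

for path in files[:20]:  # the 20 least recently opened files
    opened = time.strftime("%Y-%m-%d", time.localtime(path.stat().st_atime))
    print(f"{opened}  {path.name}")
```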

On your smartphone, prune unnecessary apps that are taking up space. On iPhones, Apple offers a tool called iPhone Storage, which shows a list of apps that take up the most data and when they were last used; on Android devices, Google offers a similar tool called Files. From there you can home in on the data hogs and delete the apps you have not touched in months.” NY Times

Is 5G technology harmful?

The 5G network and its safety are highly controversial. Many health professionals, scientists and neurologists warn that 5G may affect development in unborn children and harm the health of young people and sensitive adults. “5G actually represents a leap beyond 4G on the order of magnitude of a thousand: ‘With speeds of up to 100 gigabits per second, 5G is set to be as much as 1,000 times faster than 4G.’” The microwave and radio frequencies involved are far more intense than in earlier networks, these health professionals say, and many experts believe that 5G is by far the most dangerous technology ever approved. According to state Senator Patrick Colbeck, countries such as France and Switzerland have already introduced restrictions on Wi-Fi use in schools.

“More than 260 scientists from 42 countries have expressed their concerns over increasing exposure to EMF generated by electric and wireless devices ahead of the additional 5G roll-out. According to these scientists, numerous scientific publications have shown that EMF affects living organisms at levels well below most international and national guidelines: increased cancer risk, cellular stress, harmful genetic damage, functional changes of the reproductive system, learning and memory deficits, neurological disorders, and negative impacts on general well-being in humans. Damage goes beyond the human race, as there is clear evidence of harmful effects on both plants and animals.” NY Journal

Is Ancient DNA Research Revealing New Truths — or Falling Into Old Traps?

“Geneticists have begun using old bones to make sweeping claims about the distant past. But their revisions to the human story are making some scholars of prehistory uneasy.

David Reich’s lab is folded into a corner of a glassy, long-corridored labyrinth at Harvard Medical School. The only exterior advertisements of the nature of his research are large mounted maps of landforms all around the world. One afternoon last fall, as I stood and examined a continent, Reich materialized beside me. He is a long-limbed man with a lithe, almost balletic figure, and he wore a closefitting pullover and fading coral chinos. Though his hairline has receded and the curls behind his ears are graying, a boyish precocity makes him seem much younger than his 44 years. He led me swiftly past a confab of postdocs and into his windowed office. There was very little in the way of adornment, save a ghostly, truncated branch of the Indo-European language tree (“Greek,” “Armenian”) that someone had sketched out, on the wall over his desk, with what looked like a permanent marker.

In his recent book, Reich ranks the “ancient-DNA revolution” with the invention of the microscope. Ancient DNA, his research suggests, can explain with more certainty and detail than any previous technique the course of human evolution, history and identity — as he puts it in the book’s title, “Who We Are and How We Got Here.” Though Reich works with samples that are thousands or tens of thousands of years old, the phrase “ancient DNA” encompasses any old genetic material that has been heavily degraded, and Reich’s work has been made possible only by a series of technological and procedural advances. Researchers in the field ship or hand-carry the bones to Harvard, where clean-suited technicians expose them to ultraviolet light to prevent contamination, then bore holes in them with dental drills. These skeletal remains are often rare — one pinkie-finger fragment that researchers in a lab in Leipzig used to demonstrate the existence of a long-extinct form of archaic humans was one of only four such bones ever found. Minuscule portions of genetic code are isolated and enriched, then read by expensive sequencers; statistical techniques then plot the relationship between this particular sample and thousands more in enormous data sets.

Reich inherited from his parents a humanistic bent: His mother, Tova, is a novelist of some renown; his father, Walter, is a psychiatrist who was the first director of the United States Holocaust Memorial Museum in Washington. He entered Harvard with an inclination toward social studies, but halfway through, in pursuit of greater rigor, he switched to physics; after graduation, he went to Oxford, where he studied biochemistry with the idea that he might go on to medical school. The impression he gives when talking about these years is one of restless intellectual ambition in search of a commensurate object. He eventually returned to Oxford to complete a doctorate, in zoology, where he at last found a sense of belonging in the lineage of Luca Cavalli-Sforza, a population geneticist who spearheaded efforts to make historical inquiry resemble a hard science.

After abandoning medical school at Harvard for a postdoc at M.I.T., Reich returned to Harvard to establish his own medical-genetics lab. His chief interest lay in the effort to design novel statistical approaches to better explain how populations were related to one another. He showed, for example, on the basis of contemporary genetic data, that modern Indians are in fact a product of two highly distinct groups, one that had been on the subcontinent for thousands of years and another that formed more recently.

He got his first opportunity to study ancient DNA when Svante Paabo — a Swedish geneticist who had worked with Wilson — enlisted Reich in his efforts, based out of a lab in Leipzig, to sequence the entirety of the Neanderthal genome. Reich’s analysis helped demonstrate that most living humans, with the general exception of sub-Saharan Africans, have some Neanderthal ancestry. “It was clear with the sequencing of the Neanderthal,” Reich told me in his office, “that this was obviously the best data in the world in any type of science.” It didn’t just tell you that Indians were a mixed group; it could, in theory, specify the moment where and when that mixture began.” NY Times

Excessive Social Media Use Comparable to Drug Addiction

 

 

Source: Michigan State University.

 

 

 
Bad decision-making is a trait oftentimes associated with drug addicts and pathological gamblers, but what about people who excessively use social media? New research from Michigan State University shows a connection between social media use and impaired risky decision-making, which is commonly deficient in substance addiction.

“Around one-third of humans on the planet are using social media, and some of these people are displaying maladaptive, excessive use of these sites,” said Dar Meshi, lead author and assistant professor at MSU. “Our findings will hopefully motivate the field to take social media overuse seriously.”

The findings, published in the Journal of Behavioral Addictions, are the first to examine the relationship between social media use and risky decision-making capabilities.

“Decision making is oftentimes compromised in individuals with substance use disorders. They sometimes fail to learn from their mistakes and continue down a path of negative outcomes,” Meshi said. “But no one previously looked at this behavior as it relates to excessive social media users, so we investigated this possible parallel between excessive social media users and substance abusers. While we didn’t test for the cause of poor decision-making, we tested for its correlation with problematic social media use.”

Meshi and his co-authors had 71 participants take a survey that measured their psychological dependence on Facebook, similar to addiction. Questions on the survey asked about users’ preoccupation with the platform, their feelings when unable to use it, attempts to quit and the impact that Facebook has had on their job or studies.

The researchers then had the participants do the Iowa Gambling Task, a common exercise used by psychologists to measure decision-making. To successfully complete the task, users identify outcome patterns in decks of cards to choose the best possible deck.

Meshi and his colleagues found that the worse people performed on the gambling task, choosing more often from the bad decks, the more excessive their social media use; the better they performed, the less they used social media. This result parallels findings in substance abusers: people who abuse opioids, cocaine, or methamphetamine, among other drugs, show similar outcomes on the Iowa Gambling Task, reflecting the same deficiency in decision-making.

“With so many people around the world using social media, it’s critical for us to understand its use,” Meshi said. “I believe that social media has tremendous benefits for individuals, but there’s also a dark side when people can’t pull themselves away. We need to better understand this drive so we can determine if excessive social media use should be considered an addiction.” Neuroscience Journal 
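
For readers who want the mechanics of the study made concrete, the sketch below recreates its two measures in miniature: an Iowa Gambling Task net score (advantageous minus disadvantageous deck picks) and a dependence-survey total, then a plain Pearson correlation between them. All names and numbers are invented; this is not the authors’ code or data.

    # Hedged sketch of the study's two measures; data is fabricated and
    # rigged to reproduce the sign of the reported effect.
    from statistics import mean

    def igt_net_score(choices):
        """Iowa Gambling Task net score: advantageous minus disadvantageous
        picks. Decks C and D are the 'good' decks; A and B are the 'bad' ones."""
        good = sum(1 for c in choices if c in ("C", "D"))
        bad = sum(1 for c in choices if c in ("A", "B"))
        return good - bad

    def pearson_r(xs, ys):
        """Plain Pearson correlation coefficient."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Invented per-participant survey totals (higher = more problematic use)
    # and 100-trial deck-choice sequences.
    dependence = [5, 9, 14, 18, 22]
    net_scores = [igt_net_score(c) for c in (
        ["C"] * 70 + ["A"] * 30,   # mostly good decks -> net +40
        ["C"] * 60 + ["B"] * 40,   # net +20
        ["D"] * 50 + ["A"] * 50,   # net 0
        ["C"] * 40 + ["B"] * 60,   # net -20
        ["D"] * 30 + ["A"] * 70,   # net -40
    )]

    # Worse task performance accompanying heavier use shows up as a
    # negative correlation (close to -1 with this rigged toy data).
    print(pearson_r(dependence, net_scores))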

HOW A MAJOR HACKING SPREE GETS PERSONAL FOR THE POLITICIANS

 

 

 

 

“IN AN EXTENSIVE series of tweets throughout December, hackers leaked sensitive data from hundreds of German politicians, including members of the European Parliament, the German parliament, and regional state parliaments. The move reflects an insidious strategy criminals and hacktivists sometimes use to expose and endanger targets by leaking deeply personal details about them and their families.

The leaks also impacted Chancellor Angela Merkel to a degree, as well as some journalists and performers. Though hackers posted the stolen information to a Twitter account over many days as a sort of digital advent calendar, the tweets gained attention on Thursday, and Germany’s Federal Office for Information Security scrambled to react on Friday as Twitter removed the account.

The trove of leaked documents is massive, but early assessments indicate that it seems focused less on exposing state secrets than it does on revealing deeply personal information about its target. The exposed data includes internal political communications, like emails and scans of faxes, along with credit card information, home addresses, phone numbers, personal identification card details, private chat logs, and even voicemails from relatives and children.

“There is no doubt that personal data leaks can be dangerous. It’s difficult to offer protection to the victims,” says Lukasz Olejnik, an independent cybersecurity adviser and research associate at the Center for Technology and Global Affairs at Oxford University. “So far I don’t see one particular target—it looks like it comes from many sources and platforms. It makes you wonder why the leaked data concerns a very broad political spectrum.”

Indeed, the trove seems to contain revelations about politicians from all of Germany’s major political parties except the far-right group Alternative for Germany.

Compounding the problem, the hackers also seem to have gone to great lengths to create not just landing pages with login credentials to host the materials, but also redundancies and mirror sites, making it difficult to scrub the data from the web. They set up dozens of duplicates of the leaked data, and hosted it on many different servers, making it harder for German officials and tech companies to potentially find all of the versions and remove them—especially since the content was live for weeks, and may have been downloaded and even reposted by a number of third parties.”

Wired
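
One reason the mirror strategy works is that finding every copy is a crawl-and-match problem. A first pass defenders can run is exact-duplicate detection by content hash, sketched below with placeholder URLs and bytes; this is a generic illustration, not how German officials actually proceeded, and in practice mirrors are often re-packaged, which defeats exact hashing and forces fuzzier matching.

    # Group candidate pages by content hash to find verbatim mirrors.
    # URLs and bodies here are placeholders for illustration only.
    import hashlib
    from collections import defaultdict

    def content_fingerprint(data: bytes) -> str:
        """SHA-256 hex digest of raw content."""
        return hashlib.sha256(data).hexdigest()

    def group_mirrors(pages: dict[str, bytes]) -> dict[str, list[str]]:
        """Map each distinct fingerprint to the URLs serving that content."""
        groups = defaultdict(list)
        for url, body in pages.items():
            groups[content_fingerprint(body)].append(url)
        return dict(groups)

    # Hypothetical fetched content (in practice, output of a crawler).
    pages = {
        "https://mirror-one.example/dump": b"leaked archive bytes ...",
        "https://mirror-two.example/dump": b"leaked archive bytes ...",
        "https://unrelated.example/page":  b"something else entirely",
    }

    for digest, urls in group_mirrors(pages).items():
        if len(urls) > 1:
            print("identical mirrors:", urls)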

How Exercise Makes Us Healthier

 

 

 

People who exercise have different proteins moving through their bloodstreams than those who are generally sedentary.

 

 

“People who exercise have different proteins moving through their bloodstreams than people who do not, according to an interesting new study of the inner landscapes of sedentary and active people. The proteins in question affect many different aspects of our bodies, from immune response and blood-sugar levels to wound healing, so the new findings may bring us closer to understanding just how exercise enhances our health at a deep, molecular level. By now, we can all agree, I hope, that being physically active is good for us. It raises fitness, reduces disease risks, lengthens life spans, improves heart health and, in multiple other discrete ways, makes us stronger and more well. But scientists have surprisingly incomplete knowledge of just how exercise accomplishes all of this. They can see or measure most of the desirable outcomes of being active. But many of the underlying, intricate physiological steps involved remain mysterious.”

In the past several years, though, there has been growing scientific interest in delving into the various “’omics” of exercise. In broad terms, “’omics” refers to the identification and study of molecules related to different biological processes and how they work together. Genomics, for example, looks at molecules related to the operations of genes; metabolomics at those involved in our metabolisms, and so on.

But one of the more compelling ’omics fields is proteomics, because it focuses on proteins, which are expressed by genes and subsequently jump-start countless other physiological processes throughout our bodies.

Proteins are at the heart of our busy interior biology.

But almost nothing has been known about the proteomics of people who exercise: whether and how it might differ from that of people who rarely move, and what it might mean if it does.

So, for the new study, which was published in November in the Journal of Applied Physiology, researchers at the University of Colorado, Boulder, set out to look at various people’s proteins.

They first gathered 31 healthy young men and women, about half of whom exercised regularly, while the rest did not. They also recruited an additional group of 16 healthy middle-aged and older men, half of whom were physically active and half of whom were sedentary.

They measured everyone’s aerobic fitness and markers of their health, including blood pressure and insulin control. Then they drew blood and sent it for proteomics analysis.

In this study, the analysis looked for the presence or absence of about 1,100 known proteins, and also for complicated, teensy physiological indicators showing that certain proteins had or had not been expressed, or activated, at about the same time as one another or were otherwise interrelated.

The analysis found that, over all, about 800 of the proteins in the volunteers’ blood bore marks showing that they were interrelated.

The analysts grouped these proteins together based on how related they seemed to be. Ultimately they wound up with 10 different “modules” of proteins that they concluded were likely to be working in tandem with one another to perform various physiological tasks.

Each module contained anywhere from 14 to more than 500 related proteins, although the amounts of each protein within a module could vary from person to person.

Interestingly, the 800 proteins included many that already are known to be involved in health-related processes, such as starting or slowing inflammation and other immune-system responses.

Finally, the analysts checked to see whether the makeup of the 10 modules differed in people who were active.” NY Times
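
The “modules” in that analysis are, in essence, clusters in a protein co-abundance network: proteins whose levels rise and fall together across people get grouped. The sketch below shows one standard way to build such modules, hierarchical clustering on a correlation-derived distance, using random stand-in data; it is in the spirit of the study’s approach, not a reproduction of it.

    # Hedged sketch: group proteins into co-expression "modules" by
    # correlating abundances across people, converting correlation to a
    # distance, and cutting a hierarchical-clustering tree. Data is random.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    n_people, n_proteins = 47, 60      # 31 + 16 participants, toy protein count
    abundance = rng.normal(size=(n_people, n_proteins))

    # Pearson correlation between every pair of proteins across participants.
    corr = np.corrcoef(abundance, rowvar=False)

    # Highly correlated proteins should sit close together: distance = 1 - |r|.
    dist = 1.0 - np.abs(corr)
    np.fill_diagonal(dist, 0.0)

    # Average-linkage clustering, cut into 10 modules as in the study.
    tree = linkage(squareform(dist, checks=False), method="average")
    modules = fcluster(tree, t=10, criterion="maxclust")

    for m in range(1, 11):
        print(f"module {m}: {np.sum(modules == m)} proteins")

With random stand-in data the module sizes are arbitrary; real co-expression data yields the very uneven sizes the article describes, from 14 to more than 500 proteins per module.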

News 2018