Legos have been beloved for decades as toys that teach constructive aesthetics and foster DIY creativity. Then the company started releasing Mindstorms kits to turn static models into moving robots with a little programming magic — but these were always aimed at older kids with some tinkering prowess. Algobrix, a brick-based system going live on Kickstarter today, aims to teach block-loving children the elements of coding without having to touch a computer.
Algobrix is a core set of function blocks labeled with symbols, not letters, so your early learners can figure them out before they’ve built up their vocabulary. Line up the blocks railroad-style, hit “go” and your brick-built “Algobot” follows the instructions to move around, play audio or light up. There are also light, movement and sound sensor blocks to add environmental triggers.
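That railroad-style sequence is, in effect, a tiny imperative program: each physical block is one command, and the robot executes them in order. Here’s an illustrative sketch of that execution model in Python — the block names and robot methods are hypothetical, since Algobrix hasn’t published any API:

```python
# Hypothetical sketch of Algobrix-style sequencing: each physical block
# maps to one command, and the robot runs them in the order they're laid.

class ToyRobot:
    def __init__(self):
        self.log = []  # record of actions, standing in for real motion

    def move_forward(self):
        self.log.append("moved")

    def light_up(self):
        self.log.append("lit")

    def play_audio(self):
        self.log.append("beeped")

def run_program(blocks, robot):
    """Execute a railroad-style sequence of command blocks, left to right."""
    commands = {
        "forward": robot.move_forward,
        "light": robot.light_up,
        "sound": robot.play_audio,
    }
    for block in blocks:
        commands[block]()  # each block triggers exactly one action

bot = ToyRobot()
run_program(["forward", "light", "sound"], bot)
print(bot.log)  # ['moved', 'lit', 'beeped']
```

Sensor blocks would slot into the same model as conditional triggers, but the core idea kids absorb is just this: a program is an ordered list of instructions.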
But the system’s great advantage is its compatibility with Lego blocks — and the latest version of Mindstorms — so kids can rig up all their existing sets and models. Algobrix is also a bit more affordable than Lego’s robotics kits, starting at $150 for the basic set’s early bird Kickstarter prices with an eventual sticker price of $300. All the reward levels include a slew of sample bricks, but Algobrix is primed to bring life to every piece of the Lego universe.
Source: Algobrix (Kickstarter)
So far, Motorola has spent its year churning out reliably good mid-range devices, not to mention a handful of new Moto Mods. Not bad, but now it’s finally flagship time. Motorola officially revealed the Moto Z2 Force in New York this morning, and — as expected — it’s a surprisingly slim smartphone you’ll have a hard time breaking. And here’s the best part: it’s not just a Verizon exclusive. Motorola says the Z2 Force will be available unlocked and on all five big US carriers starting on August 10.
On paper, the Z2 Force sounds mostly great. It packs a Snapdragon 835 chipset and 4GB of RAM, like a host of other flagships this year. (The rest of the world actually gets a better base model with 6GB of RAM.) Motorola and Lenovo also finally jumped on the dual-camera bandwagon by blending one 12-megapixel monochrome sensor with another 12-megapixel color sensor, much like Huawei did with its P10. Not all dual-camera setups are created equal, but so far, I’ve been pretty impressed by our test shots on this distinctly dreary New York day. Oh, and it runs Android 7.1.1 Nougat, complete with the usual slew of Moto Experiences — you know, like twisting the phone to launch the camera or waving your hand over the screen to see your notifications.
These new additions are welcome, but the familiar Moto Z formula definitely hasn’t gone anywhere. The 5.5-inch AMOLED screen running at Quad HD hasn’t gotten any bigger or crisper since last year, but it’s thankfully just as durable, thanks to Motorola’s ShatterShield design. With some extra protective lenses and an additional touch-sensor layer over that screen, Motorola once again claims that random drops won’t destroy this thing’s display. (We ran over the original Z Force with a car last year, so we’re inclined to agree.) And just like every other Moto Z we’ve ever seen, the Z2 Force packs support for Motorola’s magnetic mods, like projectors, game controllers, and yes, a new 360-degree camera.
Here’s the thing about the Z2 Force, though: it feels like a more powerful mashup of the Z and Z Force rather than a straight successor to the latter. Consider how thin this phone is — at 6.1mm, the Z2 Force is closer in size to the original Z. That might sound like a step in the right direction, and it definitely will be for some people. Unfortunately, the Z2 Force comes with a much, much smaller battery — think 2,730mAh versus 3,500mAh in last year’s model.
Motorola says the Z2 Force’s smaller battery will last all day, and that’s probably true. But that doesn’t take the sting out of losing one of the Force line’s most valuable features: the ability to plow through around two days of work on a single charge. Motorola’s Turbo Chargers generally do a great job of powering up phones in a hurry, so hopefully this change won’t hurt too much. The event is still unfolding in front of us, so stay tuned for more info (and hands-on impressions) shortly.
You didn’t think Motorola was just going to announce a new phone, did you? The company also unveiled its latest Moto Mod — a 360-degree camera that magnetically attaches to Moto Z phones — here at an event in New York City. Details are still pretty sparse, but we do know some of the juiciest bits: it’ll record video at resolutions up to 4K, not to mention 3D sound for more immersive audio. (In other words, be sure to use headphones when watching the stuff you’ve recorded with this thing.) Oh, and it’ll set you back $300 when it launches alongside the Moto Z2 Force on August 10.
This isn’t the first photographic Moto Mod ever — that would be the Hasselblad Zoom attachment — but this one seems tailor-made to capitalize on the rise of VR headsets. (Coincidentally, this thing looks a lot like the immersive camera add-on for Andy Rubin’s Essential phone.) Two 13-megapixel sensors do all the heavy lifting here, and they’re capable of capturing 150-degree wide-angle shots when you’re only using one at a time.
As with other 360-degree cameras, like Samsung’s updated Gear 360, you’ll also be able to stream these live views directly to your social network of choice (though we’re still waiting for Motorola to release the full list of compatible services). Also like the Gear 360, you’ll do the lion’s share of the editing right on the phone, which isn’t a surprise since, you know, the camera is physically attached to it.
And really, that physical connection is the most fascinating thing about the 360 camera. Since Motorola decided to build this camera into a magnetic Mod instead of its own standalone body, it draws power straight from the phone’s battery — too bad the Moto Z2 Force has a smaller battery than we would’ve liked. Exactly how much power this thing drains while attached remains a mystery, though — we’ll ask the higher-ups just as soon as this press conference winds down.
NASA needs ideas on how to design a radiation shield for deep space vehicles, and it’s turning to origami artists and enthusiasts for help. If you’re skilled in the art, that includes you. The space agency has teamed up with crowdsourcing website Freelancer to launch a project challenging people to create an origami design showing the most efficient way to pack a radiation shield. It’s a necessary component for manned missions, since cosmic rays can damage both human bodies and electronics. Without a shield, future spacefarers will be more prone to developing cancer, degenerative tissue damage, nervous system damage, heart disease and cataracts.
NASA wants a shield big enough to cover both the spacecraft while it’s traveling and the living areas humans will establish on the moon, on Mars or any other celestial body. However, carrying something big enough to cover settlements could take up all the space in a vehicle. Imperial College London engineer Helen O’Brien told The Guardian that “NASA wants something that is sufficiently packed and compact so that when you actually land on a planet you can expand it and it will provide maximum efficiency and protection from radiation.”
The agency has been teaming up with Freelancer for its crowdsourcing projects for quite some time. Back in 2016, it asked people for help designing its cube robot’s arm, a project that’s now in its second phase. It also asked people to submit 3D models of tools Robonaut 2 can use aboard the ISS, including an RFID scanner, a grapple hook and a handrail.
Source: ScienceAlert, The Guardian, Freelancer
Media companies are scrambling to figure out how exactly to best utilize the different social networks, and many have taken to producing original content for them. Bloomberg, for example, is building an exclusive 24-hour news channel for Twitter. ABC News is the latest to explore this phenomenon: It’s teamed with digital video company ATTN to produce original video for distribution through ABC News’ social media accounts.
These original videos will be news segments. They will feature interviews and investigative journalism by ABC anchors, specifically, those from Good Morning America, World News Tonight with David Muir and Nightline. The co-founder and editor-in-chief of ATTN, Matthew Segal, will also become an ABC News contributor.
This pivot to exclusive social media video isn’t surprising; after all, brands have been courting millennials with this type of content for a while now. Recently, MGM announced that it would be producing short-form TV shows on Snapchat. Facebook is also pushing original video from “millennial-focused” companies, including the aforementioned ATTN. Presumably, this is a way to expand ABC’s audience to reach younger viewers who don’t necessarily watch TV news.
From ship-hunting Tomahawk missiles and sub-spying drone ships to semi-autonomous UAV swarms and situationally-aware reconnaissance robots, the Pentagon has long sought to protect its human forces with the use of robotic weapons. But as these systems gain ever-greater degrees of intelligence and independence, their increasing autonomy has some critics worried that humans are ceding too much power to devices whose decision-making processes we don’t fully understand (and which we may not be entirely able to control).
What constitutes an Autonomous Weapon System (AWS) depends on who you ask, as these systems exhibit varying degrees of independence. Sense and React to Military Objects (SARMO) weapons like the Phalanx and C-RAM are able to react to incoming artillery and missile threats, targeting and engaging them without human oversight. However, these aren’t fully autonomous, per se — they simply perform a set automated task. They’re no more “intelligent” than the assembly-line robots that welded your car’s frame together. There is no decision-making, only a response to an external stimulus.
Fully autonomous weapons capable of selecting, identifying and engaging with targets of their own choice without human input (think Terminators) have not yet been fielded by any nation, despite what Russia is claiming. However, a number of countries including China, the UK, Israel and, of course, the US and Russia are working on their direct precursors. As such, now is the time to devise a regulatory framework, the International Committee for Robot Arms Control (ICRAC) argued before a United Nations “Meeting of Experts” in 2014.
To a degree, both the Pentagon and the UK’s Ministry of Defense (MoD) have worked out internal guidelines for AWS development. In 2012, the Pentagon issued Directive 3000.09, which dictates that “autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” Similarly, Lord Astor of Hever told Parliament in 2013, “The MoD currently has no intention of developing systems that operate without human intervention.”
Directive 3000.09 further defines an AWS as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.” This differs from a semi-autonomous weapon, which only identifies and presents targets for a human to select. Whether there is a human operator “in the loop” with the ability to override the AWS’ targeting decision, or no human oversight at all (“out of the loop”), makes no difference to its classification as an AWS. So what does the DoD mean by “appropriate levels of human judgment”?
“We’ve debated with the US about this and they won’t say what they mean by ‘appropriate control’,” Noel Sharkey, professor of AI and robotics at the University of Sheffield and chair of the International Committee for Robot Arms Control, told Engadget. “In the last four years since [the ICRAC] have been campaigning at the UN, it’s always ‘Everything will be controlled by humans,’ but nobody will say in what way it will be controlled by humans.”
Sharkey argues that the “appropriate control” argument leads to a slippery slope of responsibilities depending on how one defines “control.” In his book, Autonomous Weapons Systems: Law, Ethics, Policy, Sharkey lays out a five-point scale of what could constitute decreasing levels of human control:
1. The human identifies, selects and engages the target, initiating all attacks
2. The AI suggests target alternatives but the human still initiates the attack
3. The AI chooses the targets but the human must approve before the attack is initiated
4. The AI chooses the targets and initiates the attack but the human has veto power
5. The AI identifies, selects and engages the target without any human oversight
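One way to read the scale is as a spectrum of who holds targeting authority at each step — below level 4, a human must take an affirmative action before any attack begins. A minimal sketch of that framing (my own, not from Sharkey’s book) in Python:

```python
from enum import IntEnum

class HumanControl(IntEnum):
    """Sharkey's five levels, from full human control (1) to none (5)."""
    HUMAN_SELECTS_AND_ENGAGES = 1  # human does everything
    AI_SUGGESTS = 2                # AI proposes targets, human initiates
    HUMAN_APPROVES = 3             # AI selects, human must approve
    HUMAN_VETO = 4                 # AI initiates, human can only veto
    FULL_AUTONOMY = 5              # no human oversight at all

def requires_human_initiation(level: HumanControl) -> bool:
    # Levels 1-3 need an affirmative human act before an attack;
    # level 4 keeps a human merely "on the loop"; level 5 removes oversight.
    return level <= HumanControl.HUMAN_APPROVES

print(requires_human_initiation(HumanControl.HUMAN_VETO))  # False
```

The interesting policy line falls between levels 3 and 4: at level 4 the default is that the attack happens unless a human intervenes in time, which is a very different burden than approving it in advance.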
All of these schemes have their relative benefits, depending on the situation and the weapon system being controlled. In fact, the Patriot Missile System operates under engagement rules similar to Point 4. Weapons that would adhere to Point 5 have not yet reached the battlefield, though not for lack of interest. UAVs like the BAE Taranis are being equipped with the ability to locate, identify and engage (but only with the OK from mission command); the Taranis can also autonomously defend itself from incoming fire and enemy aircraft. And although Samsung stringently denies the allegations, its SGR-A1 defense turret, which is currently deployed along the Korean Demilitarized Zone, is rumored to be capable of identifying and engaging enemy forces completely on its own.
Despite Sharkey’s assertion that the Pentagon has been reticent to define “appropriate control,” a Department of Defense spokesman pointed out to Engadget that General Paul J. Selva, the vice chairman of the Joint Chiefs of Staff, stated during recent Senate testimony that he does not think “it’s reasonable to put robots in charge of whether we take a human life.” The DoD cites its decades-long use of the Aegis system as evidence of its responsible operation of autonomous weapons (or at least those with autonomous features).
“Context and environment matter in determining the appropriate level of human judgment to be exercised in the use of force,” the DoD spokesman said. “Like any other weapon, a given autonomous weapon system may be appropriate for use in one operational environment and purpose, but not another.”
Given the Pentagon’s established stance in favor of maintaining a human in the loop (at least in some form), even having an AWS ready for combat doesn’t guarantee that it would be used in future conflicts. The decision to use an AWS would balance on a number of factors including “trust in the system, training, level of risk associated with the situation,” the DoD spokesman said, and would be equally influenced by the operator’s “workload, stress and experience.”
That said, the Pentagon has not completely ruled out removing human oversight should our adversaries decide to do the same. “The DoD’s autonomy community is fully aware that the legal and ethical frameworks developed by the United States differ from both our current and potential adversaries,” a DoD spokesperson said. And should, say, China or Russia develop a devastating new AI-controlled AWS, the Pentagon will “have to come up with solutions where responses occur much faster than ‘human in the loop’ allows.”
What’s more, Just Security’s Paul Scharre argues that world militaries have good reason to maintain human control: the ability to re-target these multi-million-dollar munitions mid-fight should the situation on the ground change while the weapon is in transit.
Sharkey, however, is unimpressed. “Over the years working at the UN, it has become very clear to me that none of that is about killing less civilians,” he said. “The goal here is to not waste expensive munitions. So you want to be on target as much as possible… but if it’s a high-value target, as they say under the Principle of Proportionality, then it doesn’t matter about a few civilians or a hundred civilians if it’s bin Laden or whatever.” Indeed, the world recently witnessed the destruction wrought by precision munitions when used in sufficient quantities during the liberation (and devastation) of Mosul, Iraq.
The professor is also concerned with these weapons systems’ ability to be used for covert target identification and acquisition. “Connected to the cloud in order to work in tandem with other robots, they would be the perfect tools to ID and track large numbers of people from afar and from the air,” Sharkey argued in a 2015 Wall Street Journal op-ed. “The threat of future attacks would make these robots hard to put away again.” Especially if that technology should fall into the hands of extremist non-state actors.
So if we’re opening a Pandora’s box of Skynets, as Sharkey suggests, why not simply ban them outright, like we did with land mines and chemical weapons? As the DoD argued above, ban treaties are great and all but only when everybody adheres to them.
“The regulatory regimes that are specific to nuclear weapons or to chemical weapons cannot be readily applied to artificial intelligence or to weapons that employ AI or autonomy,” the DoD spokesman said. But unlike chemical or nuclear weapons, which require difficult-to-acquire materials to be deadly, autonomous weapons systems can be created using existing and often commercially available components.
So, in the end, it certainly appears that artificially intelligent weapons, like the atom bomb and V-2 rocket before them, will eventually make their way to future battlefields whether we like it or not. Just as with nuclear technology, AI’s potential for misuse does not guarantee such misuse will occur. It’s up to the world community to come together and decide, once and for all, what we want our collective future to look like. And whether or not we want it run by Terminators.
So it’s midnight, you have an irresistible case of the munchies and your favorite eateries have stopped delivering. Are you going to settle for a bag of chips or (gasp) hold out until the morning? You won’t have to, if Lyft has its way. It just introduced a Taco Mode that swings you past a Taco Bell drive-thru between 9PM and 2AM. That’s going to be a very, very expensive taco if that’s the sole reason for your trip, so you might want to get a Double Decker to justify the excursion. You can use Taco Mode while you’re headed somewhere else, though, so this could be helpful if your friends will kill you unless you show up with chalupas in hand.
Lyft isn’t even trying to hide the 420-friendly nature of Taco Mode — it’s launching the service in California’s Orange County, for goodness’ sake. This isn’t a wink-wink-nudge-nudge gimmick, though. The Taco Bell hookup will be available in other areas by the end of the year, and across the US by 2018. In other words: Lyft and Taco Bell fully expect to make money from your late-night indulgences.
That’s not surprising. For Lyft, this is a big marketing ploy. You’re more likely to ride with Lyft over Uber if you know you can make a run for the border on the way home from the bar. If the timing’s right, it could beat ordering from UberEats and waiting for a courier. And for Taco Bell, this can keep its drive-thrus humming when they might fall silent. Don’t be surprised if Lyft strikes more deals and makes sure it can satisfy all your cravings.
Source: Lyft Blog
Adobe announced today that it is ending support for and development of Flash in 2020. The company cited declining usage statistics (80 percent of Chrome users visited a site with Flash daily in 2014, as compared to 17 percent today) and a plethora of alternatives as the reason for the termination.
Many different companies, including Google, Facebook, Apple, Microsoft and Mozilla, have also released announcements about this decision. It’s worth noting that many of these companies have already ended Flash support. Apple was one of the early contributors to Flash’s demise when, back in 2007, it first refused to support Flash on iOS. Adobe got pretty uppity about the decision in 2010, when it accused Apple of denying iPhone and iPad users “the full range of web content.”
In the intervening years, though, Flash has become less and less crucial to the web experience. Last year Mozilla announced that Flash would not be included by default with its Firefox browser because of security issues. Microsoft’s Edge also began cracking down on Flash, while Google removed Flash-based advertising entirely from its ad network and opted for HTML5, rather than Flash, in Chrome. Moves such as these foretold the death of Flash by a thousand cuts.
Security was the real problem for Flash; it was an IT person’s nightmare, with more gaping holes than a colander. It made tech headlines again and again for its many vulnerabilities. In the end, it’s why so many companies began to move away from using Flash.
Adobe plans on focusing its efforts on developing new web standards and technology. HTML5 has been leading the way as Flash’s replacement for a while now, and Adobe wants to make sure it’s still at the front, or at least not at the tail end, of the web development game.
In April, Italian marketplace chain Eataly announced it would sponsor the latest effort to preserve Leonardo da Vinci’s painting The Last Supper. It’s the perfect marketing partnership: A food company saves the most famous depiction of a meal for future generations. This excerpt is from its online announcement, titled “Eataly Saves The Last Supper”:
“We are partnering with the Italian government to sponsor a game-changing installation. Italy’s Ministry of Cultural Heritage and Activities and Tourism has designed an air-filtration system in collaboration with top Italian research institutes to filter in cool, clean air into the convent every day (10,000 cubic meters compared to today’s 3,500!).”
The state-of-the-art filtration system will be installed and active by 2019, in time to mark the 500th anniversary of da Vinci’s death.
This is the latest in a long line of attempts to stave off the inevitable. Nothing man-made is permanent; it will all be dust, eventually. But The Last Supper is a particularly tragic example of man’s impermanence. And the fight to save it has been laden with controversy, particularly in the modern era, as corporate sponsorship and claims to technology have muddied the waters of an already sensitive subject.
The work of preserving Leonardo da Vinci’s The Last Supper began soon after its completion.
The artist, ever the innovator, used an experimental technique to paint the moment that Christ told his Apostles that one of them would betray him. The scene would typically have been painted on wet plaster, but this fresco technique was fast-drying and would have required him to work quickly. Da Vinci wanted to work slowly over a number of years, and so he applied his pigments to a dry plaster wall. The result was a painting that did not adhere to its surface, owing to both the artist’s technique and Milan’s humidity.
The Last Supper began flaking a mere 20 years after da Vinci completed it. And after the Renaissance period passed, subsequent occupants of the church treated the painting with disregard. In the 1600s, occupants cut a door into the bottom of the painting, eliminating part of the table and Christ’s feet, which were composed to allude to the crucifixion. When Napoleon Bonaparte’s troops used the room as a stable in the late 1700s, they threw fragments of bricks at the painting’s faces. And during World War II, a bomb hit the church. The wall with the painting on it, however, survived intact.
Along with these indignities were multiple attempts to repair and preserve the painting — seven distinct attempts in all. These ranged from incompetent to serviceable, depending on the expertise of the artist and the technology available at the time. In 1770, for example, Giuseppe Mazza repainted all the faces with the exception of three before he was stopped; the prior who assigned him to restore the painting was sent to another convent. Another restorer, Stefano Barezzi, attempted to remove the entire painting from the wall and transfer it onto canvas in 1821, permanently damaging the work in the process.
In 1947, restorer Mauro Pellicioli began a multiyear process to reinforce the painting and return it to something resembling its original state. He adhered the flaking paint to the wall with a colorless shellac resin. He then began scraping away earlier restoration work, leaving only the areas covering empty spots, where Leonardo’s work was lost or unsalvageable.
“The shellac was applied to entomb what survived on the wall, to keep it in place into perpetuity,” Michael Daley, director of ArtWatch UK, told Engadget.
This was meant to be a final, definitive restoration. And yet, mere decades later, another, “better” restoration of the painting would be attempted.
“There is a great prejudice for the new in restoration, which is problematic,” said Daley. “It’s always dangerous in life to rush into new technologies and be the first.
“In aeronautics, researchers have to get things right,” Daley continued. “They can’t afford for airliners to go down. But in art restoration, for all the great cultural importance of the works, the people who administer the restorations are not disciplined manufacturers; their activities grew out of craft traditions. And they can be amazingly casual.”
Daley and art historian James Beck founded ArtWatch in 1991 as a watchdog organization — a vocal, if small, contingent of artists and art historians that documents and protests what it views as irresponsible art restorations. Prior to its criticisms of The Last Supper‘s restoration, the organization’s notoriety came from its criticisms of the Sistine Chapel restoration. The two share common themes.
Taking place from 1980 to 1994, the Sistine Chapel restoration was necessitated, according to the Vatican, by soot and grime, which had built up over centuries by burning candles. In addition, there were cracks in the ceiling that would have ultimately affected the integrity of the work.
When the ceiling was reintroduced to the public in 1994, the majority of observers praised the restoration. The physical ceiling was repaired and reinforced to prevent future damage. And the frescoes were now filled with bright, previously unseen colors.
But perhaps they were now too bright — brighter than they had ever been during Michelangelo’s lifetime. The cleaners thought that Michelangelo had taken a uniform approach to painting the ceiling, that he had always worked buon fresco, adding illustrative details, like shadows and shading, while the plaster was still wet. So the cleaners assumed that anything not buon fresco was either grime or was added in by another artist, and they removed it.
It became apparent, however, that Michelangelo had painted much of the shadows and detailing on the ceiling al secco, after the plaster dried. And thus, all of these original details were destroyed during the cleaning. Now the figure in the Jesse spandrel is missing his eyes.
And the Jonah figure, which serves as a focal point of the entire work, is dramatically diminished by the lack of shadow and definition. Many figures in the painting now have a sort of washed out, rough appearance.
How could this have happened to one of the most significant cultural works of the Western canon? Beck attributed it to the restorers’ arrogance: their confidence that they knew the artist’s intentions and method led to the painting’s ruin. The restorers began their work in relative isolation, without consulting an outside, independent committee of art historians, artists or scientists. Beck pushed for such a committee in 1987, when the ceiling was already half completed; one did convene later that same year, however, and filed a complimentary report.
“The new freshness of the colors and clarity of the forms on the Sistine ceiling are totally in keeping with 16th-Century Italian painting and affirm the full majesty and splendor of Michelangelo’s creation,” the report stated, dismissing the critics.
Thus, critics were not able to halt what they saw as the destruction of Michelangelo’s work. But they were able to provoke a response. To date, ArtWatch has not been widely successful in preventing these sorts of restorations. But it has seen success in changing the conversation surrounding restoration, from one that is overly laudatory to one that is initially suspicious.
It was difficult for interested parties to track the ongoing work — the result of the restoration’s sponsorship by Nippon Television Network Corporation. The network paid $4.2 million to the project in exchange for exclusive rights to document and photograph the restoration. It seems, on its face, like a conflict of interest to entrust the proper documentation of the restoration to a company that stood to benefit the most from a positive portrayal. A positive review of the restoration by The New York Times in 1990 noted that Nippon had made the restoration photos prohibitively expensive and had not yet provided the requisite before-and-after photographs that would address critics’ concerns.
Thus, ArtWatch’s two primary concerns are an overzealous approach to restoration using new techniques, which risks damage that cannot be undone, and the influence of corporate money, which calls the ethics and motives of a restoration into question.
In 1978, Pinin Brambilla Barcilon began work on what was, once again, promoted as the final, authoritative restoration of The Last Supper. This time, rather than attempting to retain the work of prior restoration attempts, her goal was to strip it all away — the shellac from the prior restoration included — and get down to da Vinci’s handiwork alone.
Barcilon viewed the painting through a microscope and then slowly, gently blotted the painting with solvent, using small bits of Japanese mulberry cloth.
The end result was a sad revelation: Only approximately 20 percent of the original painting had survived. There was, however, a trade-off: Barcilon uncovered many details, buried underneath the prior restorations, that were once thought lost. An orange here. A hand’s definition there. A face with different dimensions and expression.
The missing parts of the painting, meanwhile, were filled in with beige, to give an indication of what the painting once looked like. In Barcilon’s own words, from her book Leonardo: The Last Supper, which documented her entire process:
“Where the pictorial film was missing, the initial procedure was to reintegrate the image based on neutral tonal reduction (neutro), intended to create an ideal background of homogeneous color for the original fragments. In the interest of achieving greater legibility and unity, a method of reintegration that approximated the surrounding color was adopted later. Executed in watercolor, the reintegration was particularly laborious and delicate because the surface absorbed the color unevenly, requiring repeated applications and a gradual buildup of tonal intensity.”
The book explains and rationalizes the before and after for every part of the painting; it is exhaustive in its attention to detail. To illustrate, here is an excerpt of the before state of Thaddeus (the middle apostle in the above photograph):
“Before the latest restoration, the figure generally appeared dark, and the facial features were undefined. A dark diagonal stroke roughly designated the eyes. The hair blended in with the gray field of the wall behind the figure, and the almost indistinct volume of the hair was flattened in an abbreviated mass.”
And here is an excerpt of the after state of Thaddeus:
“Numerous and unexpected recoveries of original material emerged from beneath the blanket of repaint, especially in the face and the hair. The hair reacquired its initial volume, modulated by wavy locks threaded with subtle highlights and delicate white brushstrokes, while a soft gray glaze skillfully emphasized the hairline. The facial features proved leaner and purer, even on a chromatic level.”
Clearly, this restoration was a labor of love and devotion; one could hardly spend 20 years nose to nose with something without it being so. Critics, however, took issue with several points of this restoration, the first being its necessity. If so little of da Vinci’s original work remained, reasoned critics, then why strip away the record of prior restorations, which had become a part of the painting’s history? What was the sense of having something that was more Barcilon’s watercolors than da Vinci? Shouldn’t the more historically noteworthy restorations have remained instead?
“Restorers have a professional imperative, rather than to reflect upon what they’re seeing, to do something about it,” said Daley.
One might speculate that the corporate sponsorship, this time of tech manufacturer Olivetti, may have also increased pressure to act — to be proactive for the amount of money that was being spent rather than do nothing at all.
Science captured the imagination of Western civilization in the wake of World War II, which was won, in part, due to the superior technology of the Allies.
“In the second half of the 20th century, restorers, who themselves were technically and scientifically naive, became enthralled and envious of the authority of scientific people and disciplines,” said Daley. “Science is used to make their activities more respectable than they might be. Restorers try to maintain the scientific aura of impregnability.”
Science, in other words, can be used as a cudgel and conversation-ender to give cover to untested techniques that may not, in actuality, be all that scientific. And because there are still ongoing, internal debates over major aspects of restoration — such as whether solvents or soaps are even safe to use — one might conclude that a restoration is often not worth the risk. Such a risk, Daley claims, was taken with The Last Supper.
“The restorers painted the bare wall in these watercolors, and it’s made the painting into a kind of modern decoration,” lamented Daley.
ArtWatch is not alone in its criticisms.
“[The restorers] decided to proceed without even conducting the proper analyses to determine how much of the original painting remained,” said Mirella Simonetti, a Bologna-based restorer, in a 1995 interview with The New York Times. “And now they show these remaining crumbs, these plates and glasses, and say it is Leonardo.”
In a final cruel irony, Tom Hundley, who wrote about the restoration for the Chicago Tribune in 1999, made the following cautionary point: The painting, no longer protected by the shellac or any overpainting, was now more vulnerable to the elements and environment than ever before.
That brings this long, winding journey up to the recent announcement of Eataly’s new air-filtration system.
“Works of art currently housed in the Cenacolo Vinciano Museum have undergone material deterioration caused by factors such as indoor air pollution, biological contamination, mass tourism, and variability in microclimate conditions,” Sara Massarotto, Public Relations & Social Media Manager for Eataly USA, told Engadget. “L’Ultima Cena is no exception, and has suffered greatly from these environmental conditions through the centuries.
“In particular, the humidity emitted by the wall da Vinci used as his experimental canvas has put significant pressure on the painting, especially if the air and temperature are not carefully filtered and monitored,” continued Massarotto. “This is why restrictions to protect the fragile nature of the space — such as limiting visits to only 1,300 individuals per day — are in place.”
ArtWatch’s primary concern is neither for the system itself nor its necessity; both have been verified by a wide range of academic sources outside the museum, from the Polytechnic University of Milan to the University of Milan-Bicocca.
Rather, the problem is whether these changes have been necessitated by current, irresponsible practice and if these changes will enable rather than discourage similar practice in the future.
“The question that is often ignored is: ‘Why are serious systems of air filtration necessary?’” said Ruth Osborne, director of ArtWatch International, Inc., in an interview with Engadget. “For something like The Last Supper, air filtration had to be recommended due to previous restoration efforts that have botched the works. The continuous restoration is making the work fragile. There is also added pollution because more visitors are being allowed into the space than is healthy for the stability of the art.
“An air filtration system is not to be seen as a cure-all for works of art in danger,” continued Osborne. “It seems the maximum of 25 visitors to Leonardo’s Last Supper is still in place, though the question remains: Will those in charge decide this is reason enough to enable more to fill the space?”
Additional visitors equate to additional contamination, and this could damage the work further. Art does little good in a climate-controlled storage unit, where it cannot be seen by the public. But what happens when art, paradoxically, is too fragile to be seen? And regardless, all measures to preserve and maintain the work, whether or not it is on display, cost money. Corporations, vested interests or not, are often a necessary concession.
It’s a catch-22 that will continue unabated, because art continues to age, and today, restoration is an established industry. One can study it at a university and earn a degree in it. There are entire conservation departments at art museums dedicated to its study and implementation. It’s a matter of approach, however: to take radical measures under the cover of scientific inquiry or to err on the side of inaction, for fear of damaging something beyond repair.
In the case of The Last Supper, centuries of irreversible damage have already been done. Damage compounds damage. Mistakes pile up. And additional damage, Daley supposes, will be done in the future.
“I haven’t the slightest doubt that within the next 20 years, someone else might look at The Last Supper and say, ‘That’s not satisfactory,’” said Daley. “There will be another excuse. You can’t stop people wanting to be the person who saved the greatest art in the world.”
Image credits: Unknown / Wikimedia (Milan bombing); Ministry of Cultural Heritage and Activities and Tourism / ArtWatch UK (The Last Supper / Sistine Chapel ceiling).
Forget the piles of peer-reviewed research accumulated by scientists over decades: climate change is actually great news for mankind. Or so says Texas Rep. Lamar Smith, the Republican head of the House Committee on Science, Space and Technology. In a baffling editorial titled “Don’t Believe the Hysteria Over Carbon Dioxide,” Smith complains that Americans are being brainwashed by “alarmists’ claims” (read: scientific consensus) and urges readers to consider the many perks of atmospheric armageddon.
For example, churning carbon dioxide into the environment is no biggie because plants love the stuff. “A higher concentration of carbon dioxide in our atmosphere would aid photosynthesis, which in turn contributes to increased plant growth. This correlates to a greater volume of food production and better quality food,” Smith writes, referencing uncited “studies” while ignoring reams of research that show any benefit would be canceled out by other climate factors, such as drought and temperature increases.
Rising sea levels are also something we hear a lot about from the alarmist science community. As the planet gets warmer, ice in the northern hemisphere will melt, contributing to sea level rises that will cause catastrophic floods in coastal cities. But these are “beneficial changes,” Smith says, which could result in “commercial shipping lanes that provide faster, more convenient, and less costly routes between ports in Asia, Europe, and eastern North America.”
Of course, it’s no coincidence that Smith’s biggest campaign contributor has been the oil and gas industry, which has poured over $705,000 into furthering his career — a career which, just to reiterate, now sees him running the Committee on Science, Space and Technology. Republicans long denied that climate change was happening, but now that that position has become completely untenable, they have to comment on it in some way — in this case, by putting a happy spin on what may well herald the end of humanity.
Source: The Daily Signal