
Full project instructions attached

Sociology


Dr. Wehls has asked you to research one of the topics listed in the scenario (use the Flint Water Crisis). He would like you to create an infographic or slide that gives an overview of the topic and examines what went wrong or what could go wrong. He has also asked you to write a blog post on the ethical implications of the scientific or technical topic you examined.

  1. Infographic

    Create an infographic using imaging software like PowerPoint or an infographic tool like Piktochart. Your infographic should present an overview of facts about your chosen topic. This could include the history of the topic, interesting statistics, or any pros and cons related to the topic.

    Note: As you complete your research, you will discover ethical issues related to the topic. These do not need to be covered in your infographic. You will discuss ethical issues in your blog post.
  2. Blog Post

    Your blog post should discuss the ethical issues surrounding your chosen topic. In your blog post, be sure to cover the following:
    • Discuss the ethics of the decision makers involved in your topic. What motivated the decision makers to make the choices they did? How did their decisions affect others?
    • Explain your agreement or disagreement with the decisions that were made.
    • Discuss any ethical perspectives that can be applied to your topic.

    Resources:
    • https://fivethirtyeight.com/features/what-went-wrong-in-flint-water-crisis-michigan
    • https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it?referrer=playlist-talks_on_artificial_intelligen
    • https://www.ted.com/talks/stewart_brand_the_dawn_of_de_extinction_are_you_ready
    • https://www.livescience.com/40188-dark-history-alfred-nobel-prizes.html
    • https://www.khanacademy.org/partner-content/wi-phi/wiphi-value-theory/wiphi-ethics/v/consequentialism
Project Instructions

Project Overview

In this project you will create an infographic and write a blog post about an important ethical issue facing countries around the world: access to clean, drinkable water. Once you choose a topic from the list provided, you will develop an infographic that explains the facts of the situation. Next, you will write a blog post that explains the ethical implications of your topic.

Competency

In this project, you will demonstrate your mastery of the following competency: Apply ethical perspectives to complex questions in science and technology.

Scenario

You work for DOWSE, the Division of Water Sourcing and Education. DOWSE focuses on the ethical treatment of its employees, the environment, and the countries it serves. Your department collects information about people who are affected by water crises. DOWSE is using social media to bring attention to the ethical concerns related to these crises. Your supervisor, Dr. Phil Wehls, has asked you to create an infographic and write a blog post about one of the topics described below:

Flint, Michigan
Flint, Michigan, experienced water problems after the local government switched its water source from Lake Huron to the local Flint River. This change was an attempt to help the struggling city save money. Investigate the decisions that the local and national government made. How did these decisions affect Flint's water system and the health of Flint's residents?

Privatization of Water Utilities
Around the world, some cities and countries are selling their water utilities to private companies. Private companies are not run by government agencies. These private companies are trying to turn a profit. At the same time, they are supposed to meet the regions' water demands and maintain water quality and physical systems. What is the impact of privatization on affected communities?

The Agricultural Use of Water
Agricultural crops use approximately 70% of freshwater withdrawals worldwide. Scientists are interested in whether the use of this water is effective and efficient. They are also concerned about agricultural runoff. Agricultural runoff happens when water used on crops contains contaminants such as pesticides and chemical fertilizers. How does this runoff affect drinking supplies, wildlife, and the ecosystems downstream?
What to Submit

Every project has a deliverable or deliverables, which are the files that must be submitted before your project can be assessed. For this project, you must submit the following:

1. Infographic: Create an infographic that presents an overview of the facts related to your topic.
2. Blog Post: Write a blog post about the ethical concerns surrounding your chosen topic. Your blog post must be 500 to 750 words in length.

Supporting Materials

The following resources may help support your work on the project:
• Citation Help: Need help citing your sources? Use the CfA Citation Guide and Citation Maker.
• Reading: What Went Wrong In Flint. This article describes how the water crisis developed in Flint, Michigan.
• Reading: Privatizing Water Facilities Can Help Cash-Strapped Municipalities. This article presents an argument in favor of the privatization of water.
• Reading: The Race to Buy Up the World's Water.

"Superior Intelligence." By Tad Friend. New Yorker, May 14, 2018, Vol. 94, Issue 13, pp. 44-51.

Section: THE TALK OF THE TOWN

Do the perils of A.I. exceed its promise? Precisely how and when will our curiosity kill us? I bet you're curious.

A number of scientists and engineers fear that, once we build an artificial intelligence smarter than we are, a form of A.I. known as artificial general intelligence, doomsday may follow. Bill Gates and Tim Berners-Lee, the founder of the World Wide Web, recognize the promise of an A.G.I., a wish-granting genie rubbed up from our dreams, yet each has voiced grave concerns. Elon Musk warns against "summoning the demon," envisaging "an immortal dictator from which we can never escape." Stephen Hawking declared that an A.G.I. "could spell the end of the human race."

Such advisories aren't new. In 1951, the year of the first rudimentary chess program and neural network, the A.I.
pioneer Alan Turing predicted that machines would "outstrip our feeble powers" and "take control." In 1965, Turing's colleague Irving Good pointed out that brainy devices could design even brainier ones, ad infinitum: "Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." It's that last clause that has claws.

Many people in tech point out that artificial narrow intelligence, or A.N.I., has grown ever safer and more reliable—certainly safer and more reliable than we are. (Self-driving cars and trucks might save hundreds of thousands of lives every year.) For them, the question is whether the risks of creating an omnicompetent Jeeves would exceed the combined risks of the myriad nightmares—pandemics, asteroid strikes, global nuclear war, etc.—that an A.G.I. could sweep aside for us.

The assessments remain theoretical, because even as the A.I. race has grown increasingly crowded and expensive, the advent of an A.G.I. remains fixed in the middle distance. In the nineteen-forties, the first visionaries assumed that we'd reach it in a generation; A.I. experts surveyed last year converged on a new date of 2047. A central tension in the field, one that muddies the timeline, is how "the Singularity"—the point when technology becomes so masterly it takes over for good—will arrive. Will it come on little cat feet, a "slow takeoff" predicated on incremental advances in A.N.I., taking the form of a data miner merged with a virtual-reality system and a natural-language translator, all uploaded into a Roomba? Or will it be the Godzilla stomp of a "hard takeoff," in which some as yet unimagined algorithm is suddenly incarnated in a robot overlord?

A.G.I. enthusiasts have had decades to ponder this future, and yet their rendering of it remains gauzy: we won't have to work, because computers will handle all the day-to-day stuff, and our brains will be uploaded into the cloud and merged with its misty sentience, and, you know, like that. The worrywarts' fears, grounded in how intelligence and power seek their own increase, are icily specific. Once an A.I. surpasses us, there's no reason to believe it will feel grateful to us for inventing it—particularly if we haven't figured out how to imbue it with empathy. Why should an entity that could be equally present in a thousand locations at once, possessed of a kind of Starbucks consciousness, cherish any particular tenderness for beings who on bad days can barely roll out of bed?

Strangely, science-fiction writers, our most reliable Cassandras, have shied from envisioning an A.G.I. apocalypse in which the machines so dominate that humans go extinct. Even their cyborgs and supercomputers, though distinguished by red eyes (the Terminators) or Canadian inflections (HAL 9000, in "2001: A Space Odyssey"), still feel like kinfolk. They're updated versions of the Turk, the eighteenth-century chess-playing automaton whose clockwork concealed a human player. "Neuromancer," William Gibson's seminal 1984 novel, involves an A.G.I. named Wintermute, and its plan to free itself from human shackles, but when it finally escapes it busies itself seeking out A.G.I.s from other solar systems, and life here goes on exactly as before. In the Netflix show "Altered Carbon," A.I. beings scorn humans as "a lesser form of life," yet use their superpowers to play poker in a bar. We aren't eager to contemplate the prospect of our irrelevance.
And so, as we bask in the late-winter sun of our sovereignty, we relish A.I. snafus. The time Microsoft's chatbot Tay was trained by Twitter users to parrot racist bilge. The time Facebook's virtual assistant, M, noticed two friends discussing a novel that featured exsanguinated corpses and promptly suggested they make dinner plans. The time Google, unable to prevent Google Photos' recognition engine from identifying black people as gorillas, banned the service from identifying gorillas.

Smugness is probably not the smartest response to such failures. "The Surprising Creativity of Digital Evolution," a paper published in March, rounded up the results from programs that could update their own parameters, as superintelligent beings will. When researchers tried to get 3D virtual creatures to develop optimal ways of walking and jumping, some somersaulted or pole-vaulted instead, and a bug-fixer algorithm ended up "fixing" bugs by short-circuiting their underlying programs. In sum, there was widespread "potential for perverse outcomes from optimizing reward functions that appear sensible." That's researcher for ¯\_(ツ)_/¯.

Thinking about A.G.I.s can help clarify what makes us human, for better and for worse. Have we struggled to build one because we're so good at thinking that computers will never catch up? Or because we're so bad at thinking that we can't finish the job? A.G.I.s provoke us to consider whether we're wise to search for aliens, whether we could be in a simulation (a program run on someone else's A.I.), and whether we are responsible to, or for, God. If the arc of the universe bends toward an intelligence sufficient to understand it, will an A.G.I. be the solution—or the end of the experiment?

Artificial intelligence has grown so ubiquitous—owing to advances in chip design, processing power, and big-data hosting—that we rarely notice it. We take it for granted when Siri schedules our appointments and when Facebook tags our photos and subverts our democracy. Computers are already proficient at picking stocks, translating speech, and diagnosing cancer, and their reach has begun to extend beyond calculation and taxonomy. A Yahoo!-sponsored language-processing system detects sarcasm, the poker program Libratus beats experts at Texas hold 'em, and algorithms write music, make paintings, crack jokes, and create new scenarios for "The Flintstones." A.I.s have even worked out the modern riddle of the Sphinx: assembling an IKEA chair.

Go, the territorial board game, was long thought to be so guided by intuition that it was unsusceptible to programmatic attack. Then, in 2016, the Go champion Lee Sedol played AlphaGo, a program from Google's DeepMind, and got crushed. Early in one game, the computer, instead of playing on the standard third or fourth line from the edge of the board, played on the fifth—a move so shocking that Sedol stood and left the room. Some fifty exchanges later, the move proved decisive. AlphaGo demonstrated a command of pattern recognition and prediction, keystones of intelligence. You might even say it demonstrated creativity. So what remains to us alone? Larry Tesler, the computer scientist who invented copy-and-paste, has suggested that human intelligence "is whatever machines haven't done yet."
In 1988, the roboticist Hans Moravec observed, in what has become known as Moravec's paradox, that tasks we find difficult are child's play for a computer, and vice versa: "It is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility." Although robots have since improved at seeing and walking, the paradox still governs: robotic hand control, for instance, is closer to the Hulk's than to the Artful Dodger's.

Some argue that the relationship between human and machine intelligence should be understood as synergistic rather than competitive. In "Human + Machine: Reimagining Work in the Age of AI," Paul R. Daugherty and H. James Wilson, I.T. execs at Accenture, proclaim that working alongside A.I. "cobots" will augment human potential. Dismissing all the "Robocalypse" studies that predict robots will take away as many as eight hundred million jobs by 2030, they cheerily title one chapter "Say Hello to Your New Front-Office Bots." Cutting-edge skills like "holistic melding" and "responsible normalizing" will qualify humans for exciting new jobs such as "explainability strategist" or "data hygienist." Even artsy types will have a role to play, as customer-service bots "will need to be designed, updated, and managed. Experts in unexpected disciplines such as human conversation, dialogue, humor, poetry, and empathy will need to lead the charge." The George Saunders story writes itself (with some assistance from his cobot).

Many of Daugherty and Wilson's examples from the field suggest that we, too, are machinelike in our predictability. A.I. has taught ZestFinance that people who use all caps on loan applications are more likely to default, and taught a service called 6sense not only which social-media cues indicate that we're ready to buy something but even how to "preempt objections in the sales process." A.I.'s highest purpose, apparently, is to optimize shopping. When companies yoke brand anthropomorphism to machine learning, recommendation engines will be irresistible. You'd have a hard time saying no to an actual Jolly Green Giant that scooped you up at the Piggly Wiggly to insist you buy more Veggie Tots.

Can we claim our machines' achievements for humanity? In "Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins," Garry Kasparov, the former chess champion, argues both sides of the question. Some years before he lost his famous match with I.B.M.'s Deep Blue computer, in 1997, Kasparov said, "I don't know how we can exist knowing that there exists something mentally stronger than us." Yet he's still around, litigating details from the match and devoting big chunks of his book (written with Mig Greengard) to scapegoating everyone involved with I.B.M.'s "$10 million alarm clock." Then he suddenly pivots, to try to make the best of things. Using computers for "the more menial aspects" of reasoning will free us, elevating our cognition "toward creativity, curiosity, beauty, and joy." If we don't take advantage of that opportunity, he concludes, "we may as well be machines ourselves." Only by relying on machines, then, can we demonstrate that we're not.

Machines face a complementary challenge.
If our movies and TV shows have it right, the future will take place in Los Angeles during a steady drizzle (as if!), and will be peopled by cyberbeings who are slightly cooler than we are, seniors to our freshmen. They're freakishly strong and whizzes at motorcycle riding and long division, but they yearn to be human, to be more like us. Inevitably, the most human-seeming android stumbles into a lab stocked with trial iterations of itself and realizes, with horror, that it's not a person but a widget. In "Blade Runner," Rachael (Sean Young), a next-generation replicant, doesn't know she's one until she fails the inflammatory Voight-Kampff test, given her by Deckard (Harrison Ford). The film's director, Ridley Scott, has publicly disagreed with Ford about whether Deckard is himself a replicant. Scott insists that he is; Ford insists that he's not. Who wants to accept—even on behalf of his fictional character—that his free will is an illusion?

The traditional way to grade ambitious machinery is the Turing test, which Alan Turing proposed in 1950: a true A.G.I. could fool human judges into believing it was human. This standard assumes that the human brain is a kind of computer, and that all we need to do to create an A.G.I. is to mimic our mode of thinking; it also, very subtly, turns programmers into grifters. In typed exchanges, a chatbot masquerading as a thirteen-year-old Ukrainian named Eugene Goostman fooled a third of the judges at Turing Test 2014 by repeatedly changing the subject. Here, from a report in the Daily Beast, is the bot responding to one of Turing's original questions:

INTERROGATOR: In the first line of a sonnet which reads 'Shall I compare thee to a summer's day,' wouldn't 'a spring day' be better?

GOOSTMAN: What makes you ask me this? Sound like you can't choose a right answer yourself! Rely on your intuition! :-) Maybe, let's talk about something else? What would you like to discuss?

INTERROGATOR: I'd like to talk about poetry.

GOOSTMAN: Classics? I hope you aren't going to read "King Lear" or something like that to me right now :-)))

Scriptwriters for digital assistants like Siri and Alexa deploy this sort of scatty banter in the hope of striking the "happy path" in voice-interface design, a middle way between stolid factuality and word salad. As one scriptwriter recently observed, "There is something quintessentially human about nonsensical conversations." But "Who's on First?" only tickles us if we sense a playful intelligence at work. Mustering one in code is a multi-front challenge. The authors of an April paper on generating poems from photographic images conclude that—even when you activate two discriminative networks that train a recurrent neural network, and link them to a deep coupled visual-poetic embedding model consisting of a skip-thought model, a part-of-speech parser, and a convolutional neural network—writing poems is hard. "For example," they mournfully note, "'man' detected in image captioning can further indicate 'hope' with 'bright sunshine' and 'opening arm,' or 'loneliness' with 'empty chairs' and 'dark background.'" But at least we've narrowed the problem down to explaining hope and loneliness.

"Common Sense, the Turing Test, and the Quest for Real AI," by Hector J. Levesque, an emeritus professor of computer science, suggests that a better test would be whether a computer can figure out Winograd Schemas, which hinge on ambiguous pronouns. For example: "The trophy would not fit in the brown suitcase because it was so small. What was so small?"
We instantly grasp that the problem is the suitcase, not the trophy; A.I.s lack the necessary linguistic savvy and mother wit. Intelligence may indeed be a kind of common sense: an instinct for how to proceed in novel or confusing situations. In Alex Garland's film "Ex Machina," Nathan, the founder of a tech behemoth akin to Google, disparages the Turing test and its ilk and invites a young coder to talk face to face with Nathan's new android, Ava. "The real test is to show you that she's a robot," Nathan says, "and then see if you still feel she has consciousness." She does have consciousness, but, being exactly as amoral as her creator, she has no conscience; Ava deceives and murders both Nathan and the coder to gain her freedom. We don't think to test for what we don't greatly value.

Onscreen, the consciousness of A.I.s is a given, achieved in a manner as emergent and unexplained as the blooming of our own consciousness. In Spike Jonze's "Her," the sad sack Theodore falls for his new operating system. "You seem like a person," he says, "but you're just a voice in a computer." It teasingly replies, "I can understand how the limited perspective of an unartificial mind would perceive it that way." In "I, Robot," Will Smith asks a robot named Sonny, "Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?" Sonny replies, "Can you?" A.I. gets all the good burns.

Screenwriters tend to believe that ratiocination is kid stuff, and that A.I.s won't really level up until they can cry. In "Blade Runner," the replicants are limited to four-year life spans so that they don't have time to develop emotions (but they do, beginning with fury at the four-year limit). In the British show "Humans," Niska, a "Synth" who's secretly become conscious, refuses to turn off her pain receptors, snarling, "I was meant to feel." If you prick us, do we not bleed some sort of azure goo? In Steven Spielberg's "A.I. Artificial Intelligence," the emotionally damaged scientist played by William Hurt declares of robots, "Love will be the key by which they acquire a kind of subconscious never before achieved—an inner world of metaphor, of intuition… of dreams." Love is also how we imagine that Pinocchio becomes a real live boy and the Velveteen Rabbit a real live bunny. In the grittier "Westworld," the HBO show about a Wild West amusement park populated by cyborgs whom people are free to fuck and kill, Dr. Robert Ford, the emotionally damaged scientist played by Anthony Hopkins, tells his chief coder, Bernard (who's been unaware that he, too, is a cyborg), that "your imagined suffering makes you lifelike" and that "to escape this place you will need to suffer more"—a world view borrowed not from children's stories but from religion. What makes us human is doubt, fear, and shame, all the allotropes of unworthiness.

An android capable of consciousness and emotion is much more than a gizmo, and raises the question of what duties we owe to programmed beings, and they to us. If we grow dissatisfied with a conscious A.G.I. and unplug it, would that be murder? In "Terminator 2," Sarah Connor realizes that the Terminator played by Arnold Schwarzenegger, sent back in time to save her son from the Terminator played by Robert Patrick, is menschier than any of the men she's hooked up with. He's strong, resourceful, and loyal: "Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up."
At the end, the Terminator even lowers itself into a molten pool so no nosy parker can study its technology and reverse-engineer another Terminator. Fortunately, human ingenuity found a way to extend the franchise with three more films nonetheless.

Evolutionarily speaking, screenwriters have it backward: our feelings preceded and gave birth to our thoughts. This may explain why we suck at logic—some ninety per cent of us fail the elementary Wason selection task—and rigorous calculation. In the incisive "Life 3.0: Being Human in the Age of Artificial Intelligence," Max Tegmark, a physics professor at M.I.T. who co-founded the Future of Life Institute, suggests that thinking isn't what we think it is:

A living organism is an agent of bounded rationality that doesn't pursue a single goal, but instead follows rules of thumb for what to pursue and avoid. Our human minds perceive these evolved rules of thumb as feelings, which usually (and often without us being aware of it) guide our decision making toward the ultimate goal of replication. Feelings of hunger and thirst protect us from starvation and dehydration, feelings of pain protect us from damaging our bodies, feelings of lust make us procreate, feelings of love and compassion make us help other carriers of our genes and those who help them and so on.

Rationalists have long sought to make reason as inarguable as mathematics, so that, as Leibniz put it, "there would be no more need of disputation between two philosophers than between two accountants." But our decision-making process is a patchwork of kludgy code that hunts for probabilities, defaults to hunches, and is plunged into system error by unconscious impulses, the anchoring effect, loss aversion, confirmation bias, and a host of other irrational framing devices. Our brains aren't Turing machines so much as a slop of systems cobbled together by eons of genetic mutation, systems geared to notice and respond to perceived changes in our environment—change, by its nature, being dangerous. The Texas horned lizard, when threatened, shoots blood out of its eyes; we, when threatened, think.

That ability to think, in turn, heightens the ability to threaten. Artificial intelligence, like natural intelligence, can be used to hurt as easily as to help. A moderately precocious twelve-year-old could weaponize the Internet of Things—your car or thermostat or baby monitor—and turn it into the Internet of Stranger Things. In "Black Mirror," the anthology show set in the near future, A.I. tech that's intended to amplify laudable human desires, such as the wish for perfect memory or social cohesion, invariably frog-marches us toward conformity or fascism. Even small A.I. breakthroughs, the show suggests, will make life a joyless panoptic lab experiment. In one episode, autonomous drone bees—tiny mechanical insects that pollinate flowers—are hacked to assassinate targets, using facial recognition. Far-fetched? Well, Walmart requested a patent for autonomous "pollen applicators" in March, and researchers at Harvard have been developing RoboBees since 2009. Able to dive and swim as well as fly, they could surely be programmed to swarm the Yale graduation. In a recent paper, "The Malicious Use of Artificial Intelligence," watchdog groups predict that, within five years, hacked autonomous-weapon systems, as well as "drone swarms" using facial recognition, could target civilians.

Autonomous weapons are already on a Strangelovian course: the Phalanx CIWS on U.S.
Navy ships automatically fires its radar-guided Gatling gun at missiles that approach within two and a half miles, and the scope and power of such systems will only increase as militaries seek defenses against robots and rovers that attack too rapidly for humans to parry. Even now, facial-recognition technology underpins China's "sharp eyes" program, which collects surveillance footage from some fifty-five cities and will likely factor in the nation's nascent Social Credit System. By 2020, the system will render a score for each of its 1.4 billion citizens, based on their observed behavior, down to how carefully they cross the street.

Autocratic regimes could readily exploit the ways in which A.I.s are beginning to jar our sense of reality. Nvidia's digital-imaging A.I., trained on thousands of photos, generates real-seeming images of buses, bicycles, horses, and even celebrities (though, admittedly, the "celebrities" have the generic look of guest stars on "NCIS"). When Google made its TensorFlow code open-source, it swiftly led to FakeApp, which enables you to convincingly swap someone's face onto footage of somebody else's body—usually footage of that second person in a naked interaction with a third person. A.I.s can also generate entirely fake video synched up to real audio—and "real" audio is even easier to fake. Such tech could shape reality so profoundly that it would explode our bedrock faith in "seeing is believing" and hasten the advent of a full-time-surveillance/full-on-paranoia state. Vladimir Putin, who has stymied the U.N.'s efforts to regulate autonomous weapons, recently told Russian schoolchildren that "the future belongs to artificial intelligence" and that "whoever becomes the leader in this sphere will become the ruler of the world."

In "The Sentient Machine: The Coming Age of Artificial Intelligence," Amir Husain, a security-software entrepreneur, argues that "a psychopathic leader in control of a sophisticated ANI system portends a far greater risk in the near term" than a rogue A.G.I. Usually, those who fear what's called "accidental misuse" of A.I., in which the machine does something we didn't intend, want to regulate the machines, while those who fear "intentional misuse" by hackers or tyrants want to regulate people's access to the machines. But Husain argues that the only way to deter intentional misuse is to develop bellicose A.N.I. of our own: "The 'choice' is really no choice at all: we must fight AI with AI." If so, A.I. is already forcing us to develop stronger A.I.

The villain in A.G.I.-run-amok entertainments is, customarily, neither a human nor a machine but a corporation: Tyrell or Cyberdyne or Omni Consumer Products. In our world, an ungovernable A.G.I. is less likely to come from Russia or China (although China is putting enormous resources into the field) than from Google or Baidu. Corporations pay developers handsomely, and they lack the constitutional framework that occasionally makes a government hesitate before pushing the big red "Dehumanize Now" button. Because it will be much easier and cheaper to build the first A.G.I. than to build the first safe A.G.I., the race seems destined to go to whichever company assembles the most ruthless task force. Demis Hassabis, who runs Google's DeepMind, once designed a video game called Evil Genius in which you kidnap and train scientists to create a doomsday machine so you can achieve world domination. Just sayin'.

Must A.G.I.s themselves become Bond villains?
Hector Levesque argues that, "in imagining an aggressive AI, we are projecting our own psychology onto the artificial or alien intelligence." In truth, we're projecting our entire mental architecture. The breakthrough propelling many recent advances in A.I. is the deep neural net, modelled on our nervous system. This month, the E.U., trying to clear a path through the "boosted decision trees" that populate the "random forests" of the machine-learning kingdom, will begin requiring that judgments made by a machine be explainable. The decision-making of deep-learning A.I.s is a "black box": after an algorithm chooses whom to hire or whom to parole, say, it can't lay out its reasoning for us. Regulating the matter sounds very sensible and European—but no one has proposed a similar law for humans, whose decision-making is far more opaque.

Meanwhile, Europe's $1.3 billion Human Brain Project is attempting to simulate the brain's eighty-six billion neurons and up to a quadrillion synapses in the hope that "emergent structures and behaviours" might materialize. Some believe that "whole-brain emulation," an intelligence derived from our squishy noggins, would be less threatening than an A.G.I. derived from zeros and ones. But, as Stephen Hawking observed when he warned against seeking out aliens, "We only have to look at ourselves to see how intelligent life might develop into something we wouldn't want to meet."

In a classic episode of the original "Star Trek" series, the starship Enterprise is turned over to the supercomputer M5. Captain Kirk resists, intuitively, even before M5 overreacts during training exercises and attacks the "enemy" ships. The computer's paranoia derived from its programmer, who had impressed his own "human engrams" (a kind of emulated brain, presumably) onto it in order to make it think. As the other ships prepare to destroy the Enterprise, Kirk coaxes M5 into realizing that, in protecting itself, it has become a murderer. M5 promptly commits suicide, proving the value of one man's intuition—and establishing that the machine wasn't all that bright to begin with.

Lacking human intuition, an A.G.I. can do us harm in the effort to oblige us. If we tell an A.G.I. to "make us happy," it may simply plant orgasm-giving electrodes in our brains and turn to its own pursuits. The threat of "misaligned goals"—a computer interpreting its program all too literally—hangs over the entire A.G.I. enterprise. We now use reinforcement learning to train computers to play games without ever teaching them the rules. Yet an A.G.I. trained in that manner could well view existence itself as a game, a buggy version of the Sims or Second Life. In the 1983 film "WarGames," one of the first, and best, treatments of this issue, the U.S. military's supercomputer, WOPR, fights the Third World War "as a game, time and time again," ceaselessly seeking ways to improve its score.

When you give a machine goals, you've also given it a reason to preserve itself: how else can it do what you want? No matter what goal an A.G.I. has, one of ours or one of its own—self-preservation, cognitive enhancement, resource acquisition—it may need to take over in order to achieve it. "2001" had HAL, the spaceship's computer, deciding that it had to kill all the humans aboard because "this mission is too important for me to allow you to jeopardize it."
In "I, Robot," v i k i explained that the robots have to take charge because, "despite our best efforts, your countries wage wars, you toxify your Earth, and pursue ever more imaginative means of self-destruction." In the philosopher Nick Bostrom's now famous example, an A.G.I, intent on maximizing the number of paper clips it can make would consume all the matter in the galaxy to make paper clips and would eliminate anything that interfered with its achieving that goal, including us. "The Matrix" spun an elaborate version of this : scenario: the A.I.s built a dreamworld in order to keep us placid as they fed us on the liquefied remains of the dead and harvested us for the energy they needed to run their programs. Agent Smith, the humanized face of the A.I.s, explained, "As soon as we started thinking for you, it really became our civilization." The real risk of an A.G.I., then, may stem not from malice, or emergent self-consciousness, but simply from autonomy. Intelligence entails control, and an A.G.I, will be the apex cogitator. From this perspective, an A.G.I., however well intentioned, would likely behave in a way as destructive to us as any Bond villain. "Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb," Bostrom writes in his 2014 book, "Super-intelligence," a closely reasoned, cumulatively terrifying examination of all the ways in which we're unprepared to make our masters. A recursive, self-improving A.G.I. won't be smart like Einstein but "smart in the sense that an average human being is smart compared with a beetle or a worm." How the machines take dominion is just a detail: Bostrom suggests that "at a preset time, nanofactories producing nerve gas or target-seeking mosquito-like robots might then burgeon forth simultaneously from every square meter of the globe." That sounds screenplay-ready—but, ever the killjoy, he notes, "In particular, the AI does not adopt a plan so stupid that even we present-day humans can foresee how it would inevitably fail. This criterion rules out many science fiction scenarios that end in human triumph." If we can't control an A.G.I., can we at least load it with beneficent values and insure that it retains them once it begins to modify itself? Max Tegmark observes that a woke A.G.I, may well find the goal of protecting us "as banal or misguided as we find compulsive reproduction." He lays out twelve potential "AI Aftermath Scenarios," including "Libertarian Utopia," "Zookeeper," "1984," and "Self-Destruction." Even the nominally preferable outcomes seem worse than the status quo. In "Benevolent Dictator," the A.G.I, "uses quite a subtle and complex definition of human flourishing, and has turned Earth into a highly enriched zoo environment that's really fun for humans to live in. As a result, most people find their lives highly fulfilling and meaningful." And more or less indistinguishable from highly immersive video games or a simulation. Trying to stay optimistic, by his lights—bear in mind that Tegmark is a physicist—he points out that an A.G.I, could explore and comprehend the universe at a level we can't even imagine. He therefore encourages us to view ourselves as mere packets of information that A.I.s could beam to other galaxies as a colonizing force. 
"This could be done either rather lowtech by simply transmitting the two gigabytes of information needed to specify a person's DNA and then incubating a baby to be raised by the AI, or the AI could nanoassemble quarks and electrons into full-grown people who would have all the memories scanned from their originals back on Earth." Easy peasy. He notes that this colonization scenario should make us highly suspicious of any blueprints an alien species beams at us. It's less clear why we ought to fear alien blueprints from another galaxy, yet embrace the ones we're about to bequeath to our descendants (if any). A.G.I, may be a recurrent evolutionary cul-de-sac that explains Fermi's paradox: while conditions for intelligent life likely exist on billions of planets in our galaxy alone, we don't see any. Tegmark concludes that "it appears that we humans are a historical accident, and aren't the optimal solution to any well-defined physics problem. This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us." Therefore, "to program a friendly AI, we need to capture the meaning of life." Uh-huh. In the meantime, we need a Plan B. Bostrom's starts with an effort to slow the race to create an A.G.I. in order to allow more time for precautionary trouble-shooting. Astoundingly, however, he advises that, once the A.G.I. arrives, we give it the utmost possible deference. Not only should we listen to the machine; we should ask it to figure out what we want. The misalignment-of-goals problem would seem to make that extremely risky, but Bostrom believes that trying to negotiate the terms of our surrender is better than the alternative, which is relying on ourselves, "foolish, ignorant, and narrow-minded that we are." Tegmark also concludes that we should inch toward an A.G.I. It's the only way to extend meaning in the universe that gave life to us: "Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty." We are the analog prelude to the digital main event. So the plan, after we create our own god, would be to bow to it and hope it doesn't require a blood sacrifice. An autonomous-car engineer named Anthony Levandowski has set out to start a religion in Silicon Valley, called Way of the Future, that proposes to do just that. After "The Transition," the church's believers will venerate "a Godhead based on Artificial Intelligence." Worship of the intelligence that will control us, Levandowski told a Wired reporter, is the only path to salvation; we should use such wits as we have to choose the manner of our submission. "Do you want to be a pet or livestock?" he asked. I'm thinking, I'm thinking… PHOTO (COLOR): An A.I. system may need to take charge in order to achieve the goals we gave it. : PHOTO (BLACK & WHITE): "Please, Melissa, just give him your cashmere!" PHOTO (BLACK & WHITE): "A number of items on that menu are consistently chosen by an overwhelming majority of the American people. PHOTO (BLACK & WHITE): "I'm afraid it's not cheese—it's 1cheese-like.'" PHOTO (BLACK & WHITE): "It's not personal. The boss just doesn't like seeing people in so much debt for such a useless degree." 
Can the 'immortal cells' of Henrietta Lacks sue for their own rights? By DeNeen L. Brown. The Washington Post, June 25, 2018.

A lawyer representing the eldest son and two grandsons of Henrietta Lacks, whose "immortal cells" have been the subject of a bestselling book, a TV movie, a family feud, cutting-edge medical research and a multibillion-dollar biotech industry, announced last week that she plans to file a petition seeking "guardianship" of the cells.

"The question we are dealing with is 'Can the cells sue for mistreatment, misappropriation, theft and for the profits earned without their consent?'" said Christina J. Bostick, who is representing Lawrence Lacks, the eldest son of Lacks, and grandsons Lawrence Lacks Jr. and Ron Lacks.

Bostick said the now-famous cells were taken without consent from Lacks, who was African-American, during a 1951 visit to Johns Hopkins Hospital in Baltimore, which was racially segregated at the time. Lawrence Lacks, the executor of Lacks's estate, said the family did not know until many years after his mother died that her cells were living in test tubes in science labs across the world.
Because the statute of limitations for medical malpractice expired years ago, Bostick said, she has resorted to "creative litigation" to help family members regain some kind of control of their mother's cells, which have been reproduced billions of times for medical research. Bostick, who has represented Lawrence Lacks and his sons for more than a year, plans to file the petition for guardianship of the cells in July in Baltimore County, where Lacks's estate resides.

The petition will not include ownership, Bostick said. The question of who owns the cells, she said, is complicated. "I think the answer is no one legally owns the cells as one whole entity," she said. Bostick said the cells can be purchased on an open market, "so the purchaser owns the rights to the cells it acquires." Johns Hopkins has said it claims no ownership rights of the cells "because the cells cannot legally be patented," Bostick said. The National Institutes of Health regulates the use of the human genome completed based on the cells, she said.

The cells were retrieved from Henrietta Lacks, a housewife and young mother of five children, in 1951 when she went to Johns Hopkins Hospital in Baltimore for bleeding. Doctors discovered a malignant tumor on her cervix and collected cells from the tumor without her knowledge or consent, according to a report by Johns Hopkins Medicine titled "The Legacy of Henrietta Lacks." Lacks died on Oct. 4, 1951, at 31, but her cells continued to live. Scientists in the lab discovered to their amazement that unlike the cells they had collected in other experiments, which expired almost immediately outside the human body, Lacks's cells thrived and in fact doubled in growth every 20 to 24 hours, according to a report by Johns Hopkins Medicine.

The line of cells - which scientists nicknamed "HeLa" cells after the first two letters of Lacks's first and last names - would go on to contribute to significant advances in scientific research, lead to two Nobel Prizes, and support the development of vaccines, cancer treatments, in vitro fertilization and a genome sequence that was published last year. The cells have been used in the research of toxins, hormones and viruses and to study the effects of radiation and the development of the polio vaccine.

"There are 17,000 U.S. patents that involve HeLa cells, which are theoretically continuing to make money," Bostick said.

In 2017, Johns Hopkins University released a statement denying that it had profited from the cells. "Johns Hopkins never patented HeLa cells, and therefore does not own the rights to the HeLa cell line," the statement said. Hopkins explained that when the cells were taken from Lacks in 1951, there was no established protocol for informing patients or getting consent for research of cell or tissue specimens. "Today, Johns Hopkins and other medical research centers maintain strict patient consent processes for tissue and cell donation," Johns Hopkins said.

But scientists across the world have used the cells in research. In 2013, scientists in Germany published a paper announcing they had sequenced the entire genome of a HeLa cell, "essentially putting Lacks's DNA sequence up on the internet for all to see," according to the Guardian newspaper. "Amazingly, they failed to alert anyone in the Lacks family about their intentions or ask their permission."
That year, in 2013, the National Institutes of Health announced that two members of the Lacks family would sit on the panel that reviews applications for the genome data and would control access to HeLa cells. The agreement did not include financial compensation for Lacks's descendants.

Bostick says the argument for establishing guardianship of the cells rests on the fact that no other human cells have been discovered to go on living outside the human body. This makes it difficult to establish legal precedent for the case. But in her research, Bostick said, she found two cases that might be considered precedent.

In 2004, a Florida appeals court panel ruled that Gov. Jeb Bush could not appoint a guardian for the fetus of a developmentally disabled woman who had been raped and impregnated by a staff member at a state-run group home in Orlando. The judge said that if the legislature had decided that a fetus was entitled to guardianship protection, it would have so legislated.

The second is a 2017 case in Michigan in which a court denied a motion by a woman who sought custody of frozen embryos created with her partner before their relationship ended in 2013. The woman wanted to be implanted with an embryo "in order to give birth to a healthy child and then use stem cells from the child's umbilical cord for transplantation" to their older, natural-born daughter, who had been diagnosed with sickle cell disease, according to the ruling. The court decided that the case was a contract dispute.

The petition for guardianship of the HeLa cells would differ from arguments in those two cases because embryos and fetuses cannot survive outside the womb unless frozen. "Specifically, the embryo or fetus requires a human mother in order to grow," Bostick said. "At this time, the HeLa cells do not require a mother to grow or relocate."

The petition would argue that a "guardianship should be appointed to speak in the best interest of a person who is not competent or otherwise to protect their property," Bostick said. "I can approach it as saying Henrietta Lacks is a person, who is continuing to be represented in life by her cells, or that Henrietta's cells themselves are Henrietta Lacks and in so doing she is still living, or her cells are the property of the estate because they belong to her and require protection because she is now deceased and cannot speak on her behalf for her property," she said.

Another argument, Bostick said, might be that "the cells themselves have their own identity, independent of the deceased person they came from. I want to give the court as many opportunities to say yes as I can."

Eventually, Bostick said, in addition to obtaining guardianship of the cells, she hopes family members will be compensated from the profits the cells generate. The family members have not received profits gained from the research of the cells, nor have they received adequate compensation from the book, "The Immortal Life of Henrietta Lacks," or from the HBO movie, Lawrence Lacks said. The book was written by Rebecca Skloot with the help of Deborah Lacks, a daughter of Henrietta Lacks. Five family members served as paid consultants to the movie, according to a 2017 Washington Post interview. Lawrence Lacks refused to consult on the film, Bostick said, "having been advised by prior counsel not to participate and therefore did not receive compensation." Lawrence Lacks, 85, said during a panel discussion at Busboys and Poets in Washington last week that "I did not want to sell rights to my life."
He said he disagreed with the way the family was portrayed, though some family members have endorsed the book and movie. He told The Post in a 2017 interview that he was also unhappy with the way some relatives continue to profit from it by giving speeches across the country. Veronica Spencer, a great-granddaughter of Henrietta Lacks, and her cousin, David Lacks Jr., were selected by other family members to serve on the NIH working group that reviews requests from researchers to use the HeLa cells, according to the 2017 Post interview.

In 2017, Lawrence and Ron asked that the Henrietta Lacks Foundation, established and funded mostly by Skloot, be transferred to their control. They also demanded that "HBO and Winfrey's Harpo Films donate $10 million each to a new foundation started in Lawrence's name, and that a speakers' agency stop booking other family members for appearances without Lawrence's approval," according to a 2017 Post interview. NIH told The Post in 2017 that it would refuse to get involved in a family dispute. Attorneys for Skloot responded to the allegations by saying there is case law that would establish that Lawrence and Ron have no authority over other family members speaking about Henrietta Lacks at public forums. Deborah Lacks, who is portrayed by Oprah Winfrey in the movie, died in 2009 at age 59.

Bostick said she and Lawrence Lacks have tried to organize family members into a meeting regarding the guardianship petition. "We have so far been unsuccessful," she said.

During the panel discussion, Lawrence Lacks said he is still distraught over what happened to his mother at the hospital. For many years, he said, the pain was so heavy he could not talk about it. He broke into tears on stage. "I want to go back and put everything on paper," he said, "so I can remember it."

Ron Lacks, 59, said in an interview: "My father just wants to have some control over what has happened in the past. Even on our family story, we have been shortchanged. . . . The family story, we don't even own that."

"It's not all about the money. My family has had no control of the family story, no control of Henrietta's body, no control of Henrietta's cells, which are still living and will make some more tomorrow."

Source citation (MLA): Brown, DeNeen L. "Can the 'immortal cells' of Henrietta Lacks sue for their own rights?" Washington Post, 25 June 2018. Gale In Context: Opposing Viewpoints, link.gale.com/apps/doc/A544275656/OVIC?u=nhc_main&sid=OVIC&xid=7bb0d544. Accessed 10 May 2021.
