Consider the following case: Amazon’s Artificial Intelligence (AI) Recruiting Tool
Amazon’s Human Resources department was about to embark on a hiring spree: Since June 2015, the company’s global headcount has more than tripled to 575,700 workers... So it set up a team... Their goal was to develop AI that could rapidly crawl the web and spot candidates worth recruiting…
The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes… The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars… “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five” …
In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges… [T]he technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured”…
Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory …
(Source: Dastin, J., Reuters, October 9, 2018)
Please create a thread and post 2 paragraphs in this Discussion Forum. Include the following.
1st paragraph: As an ethical top executive at Amazon, how could you be most effective in designing policies for employees to use AI-automated employment screening while observing gender bias issues? What would you say and do? Use concepts from the textbook, materials on Blackboard, and Dr. Siqueira's video.
2nd paragraph: What are the underlying values that drove your position? What is your reasoning, which you could share with employees and stakeholders, to justify your policies? Use concepts from the textbook, materials on Blackboard, and Dr. Siqueira's video.
Since the conversation is around employment issues and the hiring and firing of employees, automation can use machine learning to create a fair hiring process. The central risk is the bias that can creep into such a system; to prevent it, the machine learning pipeline can be built so that gender-associated terms are excluded from what the model sees, keeping that bias from entering the system and causing a bigger issue. The AI can be programmed to recognize any person, regardless of gender, who is fit for the position they wish to apply for. I would personally code and review the first few models myself to make sure there is nothing related to gender bias or discrimination within them, thus making it a fair environment for anyone to get a job.
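A minimal sketch of what excluding gender-associated terms from the screening pipeline could look like, assuming a Python preprocessing step that scrubs resume text before it ever reaches the scoring model; the blocklist, the scrub_gendered_terms function, and the sample resume line are all illustrative assumptions, not part of Amazon's actual system:

```python
# Sketch: strip gender-associated terms from resume text before it reaches
# the screening model, so the model cannot score candidates on them.
# The term list below is a hypothetical example, not an exhaustive policy.
import re

GENDERED_TERMS = [
    r"\bwomen'?s\b",
    r"\bmen'?s\b",
    r"\bfemale\b",
    r"\bmale\b",
    r"\bsorority\b",
    r"\bfraternity\b",
]

GENDERED_PATTERN = re.compile("|".join(GENDERED_TERMS), flags=re.IGNORECASE)


def scrub_gendered_terms(resume_text: str) -> str:
    """Remove gender-associated terms so downstream scoring never sees them."""
    scrubbed = GENDERED_PATTERN.sub(" ", resume_text)
    # Collapse the whitespace left behind by the removals.
    return re.sub(r"\s+", " ", scrubbed).strip()


if __name__ == "__main__":
    sample = "Women's chess club captain; executed product launches."
    print(scrub_gendered_terms(sample))
    # Output: "chess club captain; executed product launches."
```

As the case itself notes, filtering terms is no guarantee that a model will not find other proxies for gender, so a policy built on this kind of scrubbing would also need ongoing review of the scores the system actually produces.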
The underlying values behind my position are integrity and ethics, together with equality in the workplace; they can be summarized as doing the right thing in a fair, honest, and responsible manner. Integrity and honesty go a long way toward establishing a solid foundation for the company's relationships with its employees, stakeholders, and customers. When it comes to employees and the hiring process in particular, I want to promote a welcoming environment where anyone feels they have a real chance at getting a job and no one feels discriminated against because they are "not a man" or because "that's a woman's job"; that kind of thinking is nonsense. The AI will be professionally programmed and reviewed to promote a healthy environment, one grounded in integrity and ethical with regard to implicit gender biases.