(1) (15 points) Implement the PageRank-Iterate algorithm on Slide #65 of the lecture "Link Analysis" on the following graph. Use d = 0.85 and set the convergence threshold ε to a value that allows the algorithm to converge in a few rounds (e.g., 5-20).

(2) (5 points) Solve the problem described on Slides #25-#26 of the lecture "Decision Theory I".

I need the code in Python plus a report in Word/LaTeX. Attached are the slides pertaining to the two questions.

COSC 621 Data Science: Link Analysis (Graphs and Networks)
Note: adapted from the original slides by Prof. Bing Liu, University of Illinois at Chicago.

Slide 2: Road map
- Introduction
- Social network analysis
- Co-citation and bibliographic coupling
- PageRank
- HITS
- Summary

Slide 3: Introduction
- Early search engines mainly compared the content similarity of the query and the indexed pages, i.e., they used information retrieval methods: cosine similarity, TF-IDF, etc.
- From 1996 it became clear that content similarity alone was no longer sufficient:
  - The number of pages grew rapidly in the mid-to-late 1990s. Try "classification technique": Google estimates roughly 10 million relevant pages. How do you choose only 30-40 pages and rank them suitably for the user?
  - Content similarity is easily spammed: a page owner can repeat some words and add many related words to boost his pages' rankings and/or make the pages relevant to a large number of queries.

Slide 4: Introduction (cont.)
- Starting around 1996, researchers began to work on the problem by resorting to hyperlinks.
  - In February 1997, Yanhong Li (Robin Li), Scotch Plains, NJ, filed a hyperlink-based search patent. The method uses the words in the anchor text of hyperlinks.
- Web pages are connected through hyperlinks, which carry important information:
  - Some hyperlinks organize information within the same site; others point to pages on other Web sites. Such outgoing hyperlinks often indicate an implicit conveyance of authority to the pages being pointed to.
  - Pages pointed to by many other pages are likely to contain authoritative information.

Slide 5: Introduction (cont.)
- During 1997-1998, the two most influential hyperlink-based search algorithms, PageRank and HITS, were reported. Both are related to social network analysis: they exploit the hyperlinks of the Web to rank pages according to their levels of "prestige" or "authority".
  - HITS: Jon Kleinberg (Cornell University), Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, January 1998.
  - PageRank: Sergey Brin and Larry Page, PhD students at Stanford University, Seventh International World Wide Web Conference (WWW7), April 1998.
  - PageRank powers the Google search engine.

Slide 6: Introduction (cont.)
- Apart from search ranking, hyperlinks are also useful for finding Web communities. A Web community is a cluster of densely linked pages representing a group of people with a special interest.
- Beyond explicit hyperlinks on the Web, links in other contexts are useful too, e.g., for discovering communities of named entities (people, organizations) in free-text documents, and for analyzing social phenomena in emails.

Slide 7: Road map (same outline as Slide 2)

Slides 8-11: Graph/network refresher: graph representation; degree, walk, path, and cycle; other terminology. (Figure-based slides; content not captured in this extract.)

Slide 12: Social network analysis
- A social network is the study of social entities (people in an organization, called actors) and their interactions and relationships.
- The interactions and relationships can be represented as a network or graph: each vertex (node) represents an actor, and each link represents a relationship.
- From the network we can study the properties of its structure, and the role, position, and prestige of each social actor. We can also find various kinds of sub-graphs, e.g., communities formed by groups of actors.

Slide 13: Social network and the Web
- Social network analysis is useful for the Web because the Web is essentially a virtual society, and thus a virtual social network:
  - Each page is a social actor and each hyperlink is a relationship.
  - Many results from social network analysis can be adapted and extended for use in the Web context.
- We study two types of social network analysis, centrality and prestige, which are closely related to hyperlink analysis and search on the Web.

Slide 14: Centrality
- Important or prominent actors are those linked or involved with other actors extensively. A person with extensive contacts (links) or communications with many other people in the organization is considered more important than a person with relatively few contacts.
- The links are also called ties. A central actor is one involved in many ties.

Slides 15-16: Degree centrality; closeness centrality. (Definitions not captured in this extract.)

Slide 17: Betweenness centrality
- If two non-adjacent actors j and k want to interact and actor i is on the path between j and k, then i may have some control over their interactions. Betweenness measures this control of i over other pairs of actors: if i is on the paths of many such interactions, then i is an important actor.

Slide 18: Betweenness centrality (cont.)
- Undirected graph: let p_jk be the number of shortest paths between actors j and k. The betweenness of actor i is the number of shortest paths that pass through i, denoted p_jk(i), normalized by the total number of shortest paths:

  C_B(i) = Σ_{j<k} p_jk(i) / p_jk    (4)

Slides 19-22: Betweenness centrality (cont.); example: Padgett's Florentine families; their centrality measures (D = degree, C = closeness, B = betweenness); example 2: terrorist network, identifying the ring leader. (Figure-based slides.)

Slide 23: Prestige
- Prestige is a more refined measure of an actor's prominence than centrality: it distinguishes ties sent (out-links) from ties received (in-links).
- A prestigious actor is one who is the object of extensive ties as a recipient. To compute prestige, we use only in-links.
- Difference between centrality and prestige: centrality focuses on out-links; prestige focuses on in-links.
- We study three prestige measures.
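Equation (4) can be checked on a tiny tie network. The sketch below counts shortest paths with breadth-first search; the graph and actor names are hypothetical, chosen so actor "A" sits between all other pairs.

```python
from collections import deque
from itertools import combinations

# Hypothetical undirected tie network: "A" is tied to every other actor,
# and the other actors have no direct ties to each other.
adj = {"A": ["B", "C", "D"], "B": ["A"], "C": ["A"], "D": ["A"]}

def bfs_counts(src):
    """Distance and number of shortest paths from src to every node."""
    dist, sigma = {src: 0}, {src: 1}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                sigma[v] = 0
                q.append(v)
            if dist[v] == dist[u] + 1:   # v reached along a shortest path
                sigma[v] += sigma[u]
    return dist, sigma

def betweenness(i):
    """Slide formula (4): sum over pairs j < k of p_jk(i) / p_jk."""
    total = 0.0
    for j, k in combinations([n for n in adj if n != i], 2):
        dj, sj = bfs_counts(j)
        di, si = bfs_counts(i)
        if dj[i] + di[k] == dj[k]:            # i lies on some shortest j-k path
            total += (sj[i] * si[k]) / sj[k]  # p_jk(i) / p_jk
    return total

print({n: betweenness(n) for n in adj})
# {'A': 3.0, 'B': 0.0, 'C': 0.0, 'D': 0.0}
```

All three leaf pairs route through "A", so its betweenness is 3; the leaves lie on no one else's shortest paths.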
  Rank prestige forms the basis of most Web page link analysis algorithms, including PageRank and HITS.

Slide 24: Degree prestige. (Definition not captured in this extract.)

Slide 25: Proximity prestige
- The degree index of prestige of an actor i considers only the actors adjacent to i. Proximity prestige generalizes it by considering both the actors directly and indirectly linked to i.
- We consider every actor j that can reach i. Let I_i be the set of actors that can reach actor i. The proximity is defined as the closeness or distance of other actors to i; let d(j, i) denote the distance from actor j to actor i.

Slides 26-28: Proximity prestige (cont.); prestige example. (Figure-based slides.)

Slide 29: Rank prestige
- The previous two prestige measures ignore an important factor: the prominence of the individual actors who do the "voting". In the real world, a person i chosen by an important person is more prestigious than one chosen by a less important person; a company CEO voting for a person counts for much more than a worker voting for the same person.
- If one's circle of influence is full of prestigious actors, one's own prestige is also high. Thus one's prestige is affected by the ranks or statuses of the involved actors.

Slide 30: Rank prestige (cont.)
- Based on this intuition, the rank prestige PR(i) is defined as a linear combination of the links that point to i:

  PR(i) = A_1i PR(1) + A_2i PR(2) + … + A_ni PR(n)

Slide 31: Road map (same outline as Slide 2)

Slide 32: Co-citation and bibliographic coupling
- Another area of research concerned with links is citation analysis of scholarly publications. A scholarly publication cites related prior work to acknowledge the origins of some ideas and to compare the new proposal with existing work.
- When a paper cites another paper, a relationship is established between the publications. Citation analysis uses these relationships (links) to perform various types of analysis.
  We discuss two types of citation analysis, co-citation and bibliographic coupling. The HITS algorithm is related to both.

Slide 33: Co-citation
- If papers i and j are both cited by paper k, they may be related in some sense to one another. The more papers they are cited by together, the stronger their relationship.

Slide 34: Co-citation (cont.)
- Let L be the citation matrix, with L_ij = 1 if paper i cites paper j, and 0 otherwise.
- Co-citation, denoted C_ij, is a similarity measure defined as the number of papers that co-cite i and j:

  C_ij = Σ_{k=1}^{n} L_ki L_kj

- C_ii is naturally the number of papers that cite i. The square matrix C formed from the C_ij is called the co-citation matrix.

Slide 35: Bibliographic coupling
- Bibliographic coupling operates on a similar principle: it links papers that cite the same articles. If papers i and j both cite paper k, they may be related. The more papers they both cite, the stronger their similarity.

Slide 36: Bibliographic coupling (cont.) (figure-based slide).

Slide 37: Road map (same outline as Slide 2)

Slide 38: PageRank
- 1998 was an eventful year for Web link analysis models: both PageRank and HITS were reported that year, and the connections between them are quite striking.
- Since then, PageRank has emerged as the dominant link analysis model, due to its query independence, its ability to combat spamming, and Google's huge business success.

Slide 39: PageRank: the intuitive idea
- PageRank relies on the democratic nature of the Web, using its vast link structure as an indicator of an individual page's value or quality.
- PageRank interprets a hyperlink from page x to page y as a vote, by page x, for page y.
- However, PageRank looks at more than the sheer number of votes; it also analyzes the page that casts the vote:
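The two citation measures reduce to matrix products: with L as defined above, C = LᵀL is the co-citation matrix and B = LLᵀ the bibliographic coupling matrix. A small sketch on a hypothetical four-paper citation matrix:

```python
import numpy as np

# Hypothetical citation matrix: L[i, j] = 1 if paper i cites paper j.
L = np.array([
    [0, 0, 1, 1],   # paper 0 cites papers 2 and 3
    [0, 0, 1, 1],   # paper 1 cites papers 2 and 3
    [0, 0, 0, 1],   # paper 2 cites paper 3
    [0, 0, 0, 0],   # paper 3 cites nothing
])

C = L.T @ L   # co-citation: C[i, j] = number of papers citing both i and j
B = L @ L.T   # bibliographic coupling: B[i, j] = number of papers both i and j cite

print(C[2, 3])  # papers 2 and 3 are co-cited by papers 0 and 1 -> 2
print(B[0, 1])  # papers 0 and 1 both cite papers 2 and 3       -> 2
```

Note that C[3, 3] = 3, the number of papers citing paper 3, matching the slide's remark that C_ii is the citation count of i.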
  - Votes cast by "important" pages weigh more heavily and help make other pages more "important".
  - This is exactly the idea of rank prestige in social network analysis.

Slide 40: More specifically
- A hyperlink from one page to another is an implicit conveyance of authority to the target page. The more in-links a page i receives, the more prestige it has.
- Pages that point to page i also have their own prestige scores: a page of higher prestige pointing to i counts for more than a page of lower prestige. In other words, a page is important if it is pointed to by other important pages.

Slide 41: PageRank algorithm
- According to rank prestige, the importance of page i (its PageRank score) is the sum of the PageRank scores of all pages that point to i. Since a page may point to many other pages, its prestige score is shared among them.
- Model the Web as a directed graph G = (V, E) with n pages in total. The PageRank score of page i, denoted P(i), is defined by:

  P(i) = Σ_{(j,i)∈E} P(j) / O_j

  where O_j is the number of out-links of page j.

Slide 42: Matrix notation
- This is a system of n linear equations in n unknowns, which we can write with a matrix. Let P be the n-dimensional column vector of PageRank values, P = (P(1), P(2), …, P(n))^T, and let A be the adjacency matrix of the graph with

  A_ij = 1/O_i if (i, j) ∈ E, and 0 otherwise.    (14)

- The n equations become

  P = A^T P    (15)

Slide 43: Solve the PageRank equation
- Equation (15) is the characteristic equation of an eigensystem: the solution P is an eigenvector with corresponding eigenvalue 1. If certain conditions are satisfied, 1 is the largest eigenvalue and P is the principal eigenvector, which can be found with the well-known power iteration method.
- Problem: the equation does not quite suffice as is, because the Web graph does not meet those conditions.

Slide 44: Using the Markov chain
- To introduce these conditions and the enhanced equation, let us derive the same Equation (15) from a Markov chain.
  - Each Web page (node in the Web graph) is regarded as a state; a hyperlink is a transition, leading from one state to another with some probability.
  - This framework models Web surfing as a stochastic process: a surfer randomly surfing the Web performs state transitions.

Slide 45: Random surfing
- Recall that O_i denotes the number of out-links of node i. Each transition probability is 1/O_i if we assume the surfer clicks the hyperlinks on page i uniformly at random.
- Assumption: the "back" button on the browser is not used, and the surfer does not type in a URL.

Slide 46: Transition probability matrix
- Let A be the state transition probability matrix

  A = [ A_11  A_12  …  A_1n
        A_21  A_22  …  A_2n
         …     …    …   …
        A_n1  A_n2  …  A_nn ]

  where A_ij is the probability that the surfer in state i (page i) moves to state j (page j). A_ij is defined exactly as in Equation (14).

Slide 47: Let us start
- Given an initial probability distribution vector p_0 = (p_0(1), p_0(2), …, p_0(n))^T over the states (pages) and an n×n transition probability matrix A, we have

  Σ_{i=1}^{n} p_0(i) = 1    (16)
  Σ_{j=1}^{n} A_ij = 1      (17)

- If A satisfies Equation (17), we say that A is the stochastic matrix of a Markov chain.

Slide 48: Back to the Markov chain
- A question of common interest: given p_0 at the beginning, what is the probability that m steps/transitions later the Markov chain is in each state j? For one step (one transition):

  p_1(j) = Σ_{i=1}^{n} A_ij p_0(i)    (18)

Slide 49: State transition (figure-based slide).

Slide 50: Stationary probability distribution
- By a theorem of Markov chains,
  a finite Markov chain defined by a stochastic matrix A has a unique stationary probability distribution if A is irreducible and aperiodic.
- Stationarity means that after a series of transitions p_k converges to a steady-state probability vector π regardless of the choice of the initial vector p_0:

  lim_{k→∞} p_k = π    (21)

Slide 51: PageRank again
- At the steady state, p_k = p_{k+1} = π, and thus π = A^T π: π is the principal eigenvector of A^T with eigenvalue 1.
- In PageRank, π is used as the PageRank vector P, so we again obtain Equation (15), reproduced here as Equation (22):

  P = A^T P    (22)

Slide 52: Is P = π justified?
- Using the stationary distribution π as the PageRank vector is reasonable and quite intuitive: it reflects the long-run probability that a random surfer visits each page, and a page has high prestige if the probability of visiting it is high.

Slide 53: Back to the Web graph
- Now let us return to the real Web context and check whether the above conditions are satisfied: is A a stochastic matrix, and is it irreducible and aperiodic? None of the three holds, so we must extend the ideal-case Equation (22) to produce the "actual PageRank" model.

Slide 54: A is not a stochastic matrix
- A, the transition matrix of the Web graph with A_ij = 1/O_i if (i, j) ∈ E and 0 otherwise, does not satisfy Equation (17), because many Web pages have no out-links. These appear in A as rows of all 0's and are called dangling pages (nodes).

Slide 55: An example Web hyperlink graph

  A = [  0   1/2  1/2   0    0    0
        1/2   0   1/2   0    0    0
         0    1    0    0    0    0
         0    0   1/3   0   1/3  1/3
         0    0    0    0    0    0
         0    0    0   1/2  1/2   0  ]

  (Row 5 is a dangling page.)

Slide 56: Fix the problem: two possible ways
1. Remove pages with no out-links during the PageRank computation, as these pages do not affect the ranking of any other page directly.
2. Add a complete set of outgoing links from each such page i to all pages on the Web.

  Using the second way:

  A = [  0   1/2  1/2   0    0    0
        1/2   0   1/2   0    0    0
         0    1    0    0    0    0
         0    0   1/3   0   1/3  1/3
        1/6  1/6  1/6  1/6  1/6  1/6
         0    0    0   1/2  1/2   0  ]

Slide 57: A is not irreducible
- Irreducible means the Web graph G is strongly connected. Definition: a directed graph G = (V, E) is strongly connected if and only if, for each pair of nodes u, v ∈ V, there is a path from u to v.
- A general Web graph represented by A is not irreducible: for some pairs of nodes u and v there is no path from u to v. In our example, there is no directed path from node 3 to node 4.

Slide 58: A is not aperiodic
- A state i in a Markov chain being periodic means there exists a directed cycle that the chain has to traverse. Definition: a state i is periodic with period k > 1 if k is the smallest number such that all paths leading from state i back to state i have a length that is a multiple of k.
- If a state is not periodic (i.e., k = 1), it is aperiodic. A Markov chain is aperiodic if all its states are aperiodic.

Slide 59: An example: periodic
- Fig. 5 shows a periodic Markov chain with k = 3. E.g., starting from state 1, the only way back to state 1 is the path 1-2-3-1 some number of times, say h; thus any return to state 1 takes 3h transitions.

Slide 60: Dealing with irreducibility and aperiodicity
- Both problems can be fixed with a single strategy: add a link from each page to every page, and give each such link a small transition probability controlled by a parameter d. The augmented transition matrix is then obviously irreducible and aperiodic.

Slide 61: Improved PageRank
- After this augmentation, at each page the random surfer has two options:
  - With probability d, he randomly chooses an out-link to follow.
  - With probability 1 − d, he jumps to a random page without following a link.
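This two-option surfing model (together with the "second fix" for dangling pages above) gives the damped power iteration that the deck develops next, and that the assignment asks to implement. Below is a sketch in the normalized form, where scores sum to 1 and each update adds (1 − d)/n; the slides' scaled form (eᵀP = n) simply multiplies every score by n. The 5-node graph is hypothetical; substitute the assignment's graph.

```python
def pagerank(out_links, d=0.85, eps=1e-4, max_iter=100):
    """PageRank-Iterate: damped power iteration until the L1 change < eps.
    eps plays the role of the assignment's epsilon; loosen it so the loop
    stops within a few rounds."""
    nodes = sorted(out_links)
    n = len(nodes)
    P = {i: 1.0 / n for i in nodes}               # P0 = e/n
    for it in range(max_iter):
        new = {i: (1 - d) / n for i in nodes}     # random-jump term
        for j in nodes:
            targets = out_links[j] or nodes       # dangling page -> links to all
            share = d * P[j] / len(targets)       # j's score split over out-links
            for i in targets:
                new[i] += share
        delta = sum(abs(new[i] - P[i]) for i in nodes)
        P = new
        if delta < eps:
            break
    return P, it + 1

# Hypothetical example graph; node 5 is dangling.
graph = {1: [2, 3], 2: [3], 3: [1], 4: [3], 5: []}
scores, rounds = pagerank(graph)
print(rounds, scores)   # node 3, with the most incoming weight, ranks highest
```

The scores always sum to 1 (each pass redistributes d of the total mass and injects 1 − d uniformly), which is a convenient sanity check for the report.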
  Equation (25) gives the improved model:

  P = ( (1 − d) E/n + d A^T ) P    (25)

  where E = ee^T (e is a column vector of all 1's), so E is an n×n square matrix of all 1's.

Slide 62: Follow our example
- With d = 0.9, each entry of (1 − d)E/n + dA^T is (1 − d)/n = 1/60 plus 0.9 times the corresponding entry of A^T: entries of 1/2 become 7/15, entries of 1/3 become 19/60, entries of 1/6 stay 1/6, and the entry of 1 becomes 11/12.

Slide 63: The final PageRank algorithm
- (1 − d)E/n + dA^T is a stochastic matrix (transposed), and it is also irreducible and aperiodic.
- If we scale Equation (25) so that e^T P = n:

  P = (1 − d) e + d A^T P    (27)

- The PageRank of each page i is

  P(i) = (1 − d) + d Σ_{j=1}^{n} A_ji P(j)    (28)

Slide 64: The final PageRank (cont.)
- Equation (28) is equivalent to the formula given in the PageRank paper:

  P(i) = (1 − d) + d Σ_{(j,i)∈E} P(j) / O_j

- The parameter d, called the damping factor, can be set between 0 and 1; d = 0.85 was used in the PageRank paper.

Slide 65: Compute PageRank
- Use the power iteration method.

Slide 66: Advantages of PageRank
- Fighting spam: a page is important if the pages pointing to it are important, and since it is not easy for a page owner to add in-links to his/her page from other important pages, PageRank is hard to influence.
- PageRank is a global, query-independent measure: the PageRank values of all pages are computed and saved offline rather than at query time.
- Criticism: query independence. PageRank cannot distinguish pages that are authoritative in general from pages that are authoritative on the query topic.

Slide 67: Road map (same outline as Slide 2)

Slide 68: HITS
- HITS stands for Hypertext Induced Topic Search. Unlike PageRank, which is a static ranking algorithm, HITS is search-query dependent.
- When the user issues a search query:
  - HITS first expands the list of relevant pages returned by a search engine, and
  - then produces two rankings of the expanded set of pages: an authority ranking and a hub ranking.

Slide 69: Authorities and hubs
- Authority: roughly, an authority is a page with many in-links. The idea is that the page may have good or authoritative content on some topic, so many people trust it and link to it.
- Hub: a hub is a page with many out-links. It serves as an organizer of the information on a particular topic and points to many good authority pages on that topic.

Slide 70: Examples (figure-based slide).

Slide 71: The key idea of HITS
- A good hub points to many good authorities, and a good authority is pointed to by many good hubs: authorities and hubs have a mutual reinforcement relationship. Fig. 8 shows some densely linked authorities and hubs (a bipartite sub-graph).

Slide 72: The HITS algorithm: grab pages
- Given a broad search query q, HITS collects a set of pages as follows:
  - It sends q to a search engine and collects the t highest-ranked pages (t = 200 is used in the HITS paper). This set is called the root set W.
  - It then grows W by including any page pointed to by a page in W and any page that points to a page in W. This larger set S is called the base set.

Slide 73: The link graph G
- HITS works on the pages in S and assigns every page in S an authority score and a hub score. Let n be the number of pages in S, G = (V, E) the hyperlink graph of S, and L its adjacency matrix with L_ij = 1 if (i, j) ∈ E and 0 otherwise.

Slide 74: The HITS algorithm
- Let a(i) be the authority score of page i and h(i) its hub score. The mutual reinforcement of the two scores is expressed as:

  a(i) = Σ_{(j,i)∈E} h(j)    (31)
  h(i) = Σ_{(i,j)∈E} a(j)    (32)

Slide 75: HITS in matrix form
- Let a = (a(1), a(2), …, a(n))^T be the column vector of authority scores and h = (h(1), h(2), …, h(n))^T the column vector of hub scores. Then:

  a = L^T h    (33)
  h = L a      (34)

Slide 76: Computation of HITS
- Authority and hub scores are computed the same way as PageRank scores: by power iteration. With a_k and h_k denoting the authority and hub vectors at the k-th iteration, the iterations generate the final solutions.

Slide 77: The algorithm (pseudocode slide; not captured in this extract).

Slide 78: Relationships with co-citation and bibliographic coupling
- Recall that the co-citation of pages i and j is C_ij = Σ_{k=1}^{n} L_ki L_kj = (L^T L)_ij, so the authority matrix L^T L of HITS is the co-citation matrix C.
- Likewise, the bibliographic coupling of pages i and j is B_ij = Σ_{k=1}^{n} L_ik L_jk = (L L^T)_ij, so the hub matrix L L^T of HITS is the bibliographic coupling matrix B.

Slide 79: Strengths and weaknesses of HITS
- Strength: it ranks pages according to the query topic, which may provide more relevant authority and hub pages.
- Weaknesses:
  - It is easily spammed: adding out-links to one's own page is easy, so it is in fact quite easy to influence HITS.
  - Topic drift: many pages in the expanded set may not be on topic.
  - Inefficiency at query time: collecting the root set, expanding it, and performing the eigenvector computation are all expensive operations done at query time.

Slide 80: Road map (same outline as Slide 2)

Slide 81: Summary
- We introduced:
  - social network analysis, centrality and prestige;
  - co-citation and bibliographic coupling;
  - PageRank, which powers Google;
  - HITS.
- Yahoo! and MSN have their own link-based algorithms as well, but they are not published.
- Important to note: hyperlink-based ranking is not the only algorithm used in search engines.
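Before leaving HITS: the mutual-reinforcement iteration of Slides 74-76 can be sketched in a few lines. The 4-page link graph is hypothetical, and the normalization (dividing by the largest score) is one common choice to keep the vectors bounded.

```python
import numpy as np

# Hypothetical link graph: L[i, j] = 1 if page i links to page j.
L = np.array([
    [0, 1, 1, 0],   # page 0 points to 1 and 2 (a hub)
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 0],   # page 3 points to 1 and 2 (a hub)
], dtype=float)

a = np.ones(4)      # authority scores
h = np.ones(4)      # hub scores
for _ in range(50):
    a = L.T @ h                # Eq. (33): a_k = L^T h_{k-1}
    h = L @ a                  # Eq. (34): h_k = L a_k
    a /= np.abs(a).max()       # normalize so the scores stay bounded
    h /= np.abs(h).max()

print(a.round(3), h.round(3))  # page 2 is the top authority; 0 and 3 are top hubs
```

Page 2 collects in-links from three hubs, so it dominates the authority vector; pages 0 and 3 have identical out-link rows and end up with identical hub scores.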
  In fact, it is combined with many content-based factors to produce the final ranking presented to the user.

Slide 82: Summary (cont.)
- Links can also be used to find communities: groups of content creators or people sharing common interests, e.g., Web communities, email communities, and named-entity communities.
- Focused crawling combines contents and links to crawl Web pages on a specific topic: follow links, and use learning/classification to determine whether a page is on topic.

COSC 621 Data Science: Decision Theory I
Adapted from the original slides by Professor Reza Ahmadi, UCLA, USA (Slide 1)

Slide 2: Learning objectives
- Structuring the decision problem and decision trees
- Types of decision-making environments:
  - decision making under uncertainty, when probabilities are not known
  - decision making under risk, when probabilities are known
- Expected value of perfect information
- Decision analysis with sample information
- Developing a decision strategy
- Expected value of sample information

Slide 3: Types of decision-making environments
- Type 1: Decision making under certainty. The decision maker knows for sure (that is, with certainty) the outcome or consequence of every decision alternative.
- Type 2: Decision making under uncertainty. The decision maker has no information at all about the various outcomes or states of nature.
- Type 3: Decision making under risk. The decision maker has some knowledge of the probability of occurrence of each outcome or state of nature.

Slide 4: Decision trees
- A decision tree is a chronological representation of the decision problem. Each decision tree has two types of nodes: round nodes correspond to states of nature, square nodes to decision alternatives.
- The branches leaving each round node represent the different states of nature; the branches leaving each square node represent the different decision alternatives. At the end of each limb of the tree are the payoffs attained from the series of branches making up that limb.
Slide 5: Decision making under uncertainty
- If the decision maker does not know with certainty which state of nature will occur, he/she is said to be making a decision under uncertainty. The five commonly used criteria are:
  1. the optimistic approach (maximax)
  2. the conservative approach (maximin)
  3. the minimax regret approach
  4. equally likely (Laplace criterion)
  5. criterion of realism with α (Hurwicz criterion)

Slide 6: Optimistic approach
- Used by an optimistic decision maker: the decision with the largest possible payoff is chosen. If the payoff table were in terms of costs, the decision with the lowest cost would be chosen.

Slide 7: Conservative approach
- Used by a conservative decision maker: for each decision the minimum payoff is listed, and the decision corresponding to the maximum of these minimum payoffs is selected (the minimum possible payoff is maximized). If the payoffs were costs, the maximum cost would be determined for each decision and the decision with the minimum of these maximum costs selected (the maximum possible cost is minimized).

Slide 8: Minimax regret approach
- Requires constructing a regret (opportunity loss) table: for each state of nature, compute the difference between each payoff and the largest payoff for that state. Then list the maximum regret for each possible decision; the decision chosen is the one corresponding to the minimum of the maximum regrets.
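The five criteria take only a few lines each. This sketch applies them to the marketing-strategy payoff table worked on the slides that follow (profits in $1000), so the outputs can be checked against the deck's answers.

```python
# Payoff table: states s1 (receptive) and s2 (unfavorable), profits in $1000.
payoff = {"d1": [20, 6], "d2": [25, 3]}

maximax = max(payoff, key=lambda d: max(payoff[d]))    # optimistic
maximin = max(payoff, key=lambda d: min(payoff[d]))    # conservative

# Minimax regret: regret = column best minus the payoff, per state.
col_best = [max(payoff[d][j] for d in payoff) for j in range(2)]
regret = {d: [col_best[j] - payoff[d][j] for j in range(2)] for d in payoff}
minimax_regret = min(payoff, key=lambda d: max(regret[d]))

laplace = max(payoff, key=lambda d: sum(payoff[d]) / 2)   # equally likely

alpha = 0.8   # coefficient of realism used in the slides' example
hurwicz = max(payoff,
              key=lambda d: alpha * max(payoff[d]) + (1 - alpha) * min(payoff[d]))

print(maximax, maximin, minimax_regret, laplace, hurwicz)
# d2 d1 d2 d2 d2
```

These match the slides: only the conservative (maximin) criterion picks d1; the other four pick d2.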
Slide 9: Example: marketing strategy
- Consider a problem with two decision alternatives (d1 and d2) and two states of nature, S1 (market receptive) and S2 (market unfavorable), with the following payoff table of profits (in $1000):

        s1   s2
  d1    20    6
  d2    25    3

Slide 10: Example: optimistic approach
- Choose the decision with the largest single value in the payoff table. The maximum payoffs are d1 = 20 and d2 = 25; the largest value is 25, so the optimal decision is d2.

Slide 11: Example: conservative approach
- List the minimum payoff for each decision (d1 = 6, d2 = 3) and choose the decision with the maximum of these minimums: choose d1.

Slide 12: Example: minimax regret approach
- Compute the regret table by subtracting each payoff in a column from the largest payoff in that column:

        s1   s2   maximum
  d1     5    0      5
  d2     0    3      3   <- minimum

- Select the decision with the minimum maximum regret: d2.

Slide 13: Example: equally likely (Laplace) criterion
- The Laplace criterion picks the alternative with the highest average payoff: average for d1 = (20 + 6)/2 = 13; average for d2 = (25 + 3)/2 = 14. Thus d2 is selected.

Slide 14: Example: criterion of realism (Hurwicz)
- Often called the weighted average, the criterion of realism (Hurwicz) is a compromise between an optimistic and a pessimistic decision. First select a coefficient of realism α between 0 and 1 (α close to 1: the decision maker is optimistic about the future; α close to 0: pessimistic), then compute

  payoff = α × (maximum payoff) + (1 − α) × (minimum payoff)

- In our example let α = 0.8: payoff for d1 = 0.8 × 20 + 0.2 × 6 = 17.2; payoff for d2 = 0.8 × 25 + 0.2 × 3 = 20.6. Thus select d2.

Slide 15: Decision making with probabilities
- If probabilistic information about the states of nature is available, one may use the expected monetary value (EMV) approach (also known as expected value, EV). The expected return for each decision is calculated by summing the products of the payoff under each state of nature and the probability of that state occurring. The decision yielding the best expected return is chosen.

Slide 16: Expected value of a decision alternative
- The expected value of a decision alternative is the sum of its weighted payoffs:

  EV(d_i) = Σ_{j=1}^{N} P(s_j) V_ij

  where N is the number of states of nature, P(s_j) the probability of state s_j, and V_ij the payoff corresponding to decision d_i and state s_j.

Slide 17: Example: marketing strategy, expected value approach
- Refer to the previous problem and assume the probability of the market being receptive is known to be 0.75. Use the expected monetary value criterion to determine the optimal decision.

Slide 18: Expected value of perfect information
- Frequently, information is available that can improve the probability estimates for the states of nature. The expected value of perfect information (EVPI) is the increase in expected profit that would result if one knew with certainty which state of nature would occur. The EVPI provides an upper bound on the expected value of any sample or survey information.

Slide 19: EVPI calculation
- Step 1: Determine the optimal return corresponding to each state of nature.
- Step 2: Compute the expected value of these optimal returns.
- Step 3: Subtract the EV of the optimal decision from the amount determined in step 2.
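The EMV and the three-step EVPI calculation above can be sketched directly on the marketing-strategy numbers (profits in $1000, with P(s1 = receptive) = 0.75, as given on the slides):

```python
# Payoff table (profits in $1000) and state probabilities from the slides.
payoff = {"d1": [20, 6], "d2": [25, 3]}
prob = [0.75, 0.25]

# EMV of each alternative; the best EMV identifies the optimal decision.
emv = {d: sum(p * v for p, v in zip(prob, payoff[d])) for d in payoff}
best_emv = max(emv.values())          # 19.5 -> choose d2 ($19,500)

# EVPI: expected value of the per-state optimal returns, minus the best EMV.
ev_perfect = sum(prob[j] * max(payoff[d][j] for d in payoff) for j in range(2))
evpi = ev_perfect - best_emv          # 0.75 -> $750

print(emv, evpi)
```

The result matches Slide 20's figure: EVPI = 0.75 × 25,000 + 0.25 × 6,000 − 19,500 = $750.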
Slide 20: Example: marketing strategy, expected value of perfect information
- Calculate the expected value of the best action for each state of nature and subtract the EV of the optimal decision:

  EVPI = 0.75 × 25,000 + 0.25 × 6,000 − 19,500 = $750

- Another example: https://www.youtube.com/watch?v=TwG_xRj9S_0

Slide 21: Decision analysis with sample information
- Knowledge of sample or survey information can be used to revise the probability estimates for the states of nature. The estimates held prior to obtaining this information are called prior probabilities.
- With knowledge of the conditional probabilities for the outcomes (indicators) of the sample or survey information, these priors can be revised by employing Bayes' theorem. The outcomes of this analysis are called posterior probabilities.

Slide 22: Posterior probabilities calculation
- Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator; this gives the joint probabilities of the states and the indicator.
- Step 2: Sum these joint probabilities over all states; this gives the marginal probability of the indicator.
- Step 3: For each state, divide its joint probability by the marginal probability of the indicator; this gives the posterior probability distribution.

Slide 23: Expected value of sample information
- The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information. EVSI calculation:
- Step 1: Determine the optimal decision and its expected return for each possible outcome of the sample, using the posterior probabilities of the states of nature.
- Step 2: Compute the expected value of these optimal returns.
- Step 3: Subtract the EV of the optimal decision obtained without the sample information from the amount determined in step 2.
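The three posterior-probability steps can be sketched using the survey reliabilities stated in the marketing example that follows (the research firm predicts a receptive market correctly 90% of the time and an unfavorable one correctly 85% of the time), here for the indicator "predicted receptive":

```python
# Priors over the states and P(indicator = "receptive" | state), from the example.
prior = {"s1": 0.75, "s2": 0.25}
p_pred_receptive = {"s1": 0.90, "s2": 0.15}

# Step 1: joint probabilities of each state with the indicator.
joint = {s: prior[s] * p_pred_receptive[s] for s in prior}
# Step 2: marginal probability of the indicator.
marginal = sum(joint.values())                        # 0.7125
# Step 3: posterior distribution over the states.
posterior = {s: joint[s] / marginal for s in prior}   # s1 -> about 0.947

print(marginal, posterior)
```

A "receptive" prediction raises the probability of a receptive market from 0.75 to roughly 0.947.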
Slide 24: Efficiency of sample information
- Efficiency of sample information is the ratio EVSI/EVPI. Since the EVPI provides an upper bound for the EVSI, efficiency is always a number between 0 and 1.

Slide 25: Refer to the marketing strategy example
- It is known from past experience that of all the cases in which the market was receptive, a research company predicted it in 90 percent of the cases (in the other 10 percent, they predicted an unfavorable market). Also, of all the cases in which the market proved to be unfavorable, the research company predicted it correctly in 85 percent of the cases (in the other 15 percent, they predicted it incorrectly). Answer the following questions based on this information.

Slide 26: Example: marketing strategy
1. Draw a complete probability tree.
2. Find the posterior probabilities of all states of nature.
3. Using the posterior probabilities, which plan would you recommend?
4. How much should one be willing to pay (at most) for the research survey? That is, compute the expected value of sample information (EVSI).
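As a sketch of the EVSI computation these questions call for, under the stated numbers (the arithmetic here is my own working from the slides' data, not a figure printed on the deck):

```python
# Marketing-strategy data (profits in $1000).
payoff = {"d1": [20, 6], "d2": [25, 3]}
prior = [0.75, 0.25]
# P(prediction | state): I1 = "receptive" predicted, I2 = "unfavorable" predicted.
likelihood = {"I1": [0.90, 0.15], "I2": [0.10, 0.85]}

# EV without sample information (best EMV under the priors): 19.5.
ev_no_info = max(sum(p * v for p, v in zip(prior, payoff[d])) for d in payoff)

# EV with sample information: for each indicator, find the posterior-optimal
# decision, then weight its return by the indicator's marginal probability.
ev_with_info = 0.0
for ind, lik in likelihood.items():
    joint = [prior[j] * lik[j] for j in range(2)]
    marginal = sum(joint)                      # P(indicator)
    post = [jp / marginal for jp in joint]     # posterior state probabilities
    best = max(sum(p * v for p, v in zip(post, payoff[d])) for d in payoff)
    ev_with_info += marginal * best

evsi = ev_with_info - ev_no_info               # about 0.2625 -> roughly $262.50
print(round(evsi, 4))
```

Under these assumptions one would recommend d2 when the survey predicts a receptive market and d1 otherwise, pay at most about $262.50 for the survey, and the efficiency of the sample information is EVSI/EVPI = 0.2625/0.75 = 0.35.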