The two latter findings, about the ubiquity of games of chance and betting and the popularity of insurance contracts, show a distinct deviation from what Kahneman and Tversky claim and give our reformulation more justification.

Winning or losing is not a predicate that can be granted to a situation unambiguously; it is always a matter of how the situation involved is reconstructed. Rather than regarding a situation as a winning or losing situation, one might better focus on the original status (initial wealth) of the test person. Whether a person earns 1, or 5, per month does affect the decision; it might influence it much more than the various amounts at stake. Similarly, a person who already has great debts with no prospect of repaying them might be willing to accept risks that other persons would find completely irrational; indeed, that is the basis for some gamblers.

The preceding considerations also indicate that it is difficult to separate the perception of impact from the perception of probability of future outcomes. Personal criteria come to the fore. Personality is a further factor that influences decisions under uncertainty; it is possible that women, on average, have a risk profile distinct from that of men. All these considerations reveal how difficult it is to weigh the evidence rationally. Gigerenzer investigates the effect of gut decisions on the quality of decisions: first, gut decisions are popular; second, they are not necessarily worse than a careful analysis of all givens; third, it might be good to support quicker decisions by checklists of what to consider.

A mixture of analysis and gut decisions might be superior to purely analytic decisions, not only because of the undue delay in decision that careful analysis, including collecting all the necessary information, entails. A final example of personal criteria refers to the three-door problem, with two goats and a car behind the doors, where the candidate can freely choose one. After the candidate has made the choice, the moderator opens one of the remaining doors to show a goat and offers the option to change the first choice, i.e., to switch to the other closed door.
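The advantage of switching can be checked by a short simulation; the following sketch (the function name and trial counts are our choices) plays the game many times and counts the wins for the strategies of staying and switching.

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the three-door problem; return the rate of winning the car."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # candidate's first choice
        # The moderator opens a door that hides a goat and is not the pick.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Change to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Switching wins in about two thirds of the games, staying only in one third: the first choice hits the car with probability 1/3, and switching wins exactly when the first choice was wrong.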

However, with all the explanations, people in the majority stubbornly stick with their first choice even if they are fully capable of understanding the solution. The discussion is marked by fierce emotional expressions, so that one may ask for the reason behind them.

They do not compare risks in the sense of (3) or weighted risk in the sense of (4); no, they simply compare the impact of their decision. And the impact of losing by their own action, and being responsible for it, is by far the worst they can imagine.

Therefore, they defend their first choice against all odds. Idiosyncratic criteria, gut feelings, fear of a high loss in the worst case, and responsibility for what one does (as compared to bad luck) always overlay a rational approach. It starts with hindering the separation of the impact from the small probability of occurrence; it biases the perception of the small probabilities of adverse events; and it increases the factual impact of the adverse event and focuses on the worst case, letting any weighting seem irrational and unfeasible.

The Logic of Repeated Decisions and One-Off Decisions

In a statistical variant of experiments 1 and 2, a candidate has to decide repeatedly, 1, times, between the presented options. Instead of performing the experiment, we just tell what we would do and give a rationale for it. What would the reader decide in the situations presented in Table 5? Again, utility may play a role, but as the payments are so small, more or less only the amount counts; this argument might not apply if the single amounts were higher.

We decide for the risky option b2 in the winning situation of experiment 1s and for option b3 in the losing situation of experiment 2s.

Statistical variant of experiments 1 and 2: 1, games have to be played; each requires a decision between the two given options.

To win an amount of more than 1, in the first experiment, we need to win in more than of the single situations.

We have a probability of at least 0. For the losing situation now the results are just reversed: we have a probability of nearly 1 to lose more than 1, and we lose 1, with a probability of at least 0.
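The exact stakes of the statistical variant are not reproduced above, so the following sketch assumes, for illustration, the classic pair from Kahneman and Tversky: a sure gain of 3,000 against a gain of 4,000 with probability 0.8. Repeated over many games, the risky option almost surely yields the higher total.

```python
import random

def repeated_play(n_games=1000, trials=1000):
    """Fraction of series in which the risky option (4,000 with probability 0.8)
    beats the sure option (3,000 per game) over n_games repetitions."""
    sure_total = 3000 * n_games
    risky_beats_sure = 0
    for _ in range(trials):
        risky_total = sum(4000 if random.random() < 0.8 else 0
                          for _ in range(n_games))
        risky_beats_sure += (risky_total > sure_total)
    return risky_beats_sure / trials
```

With an expected gain of 3,200 per game against the sure 3,000, the law of large numbers makes the risky option superior in nearly every series of 1,000 games, while in a one-off decision it still pays nothing with probability 0.2.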

It makes a difference in terms of the success of the strategy used whether one faces a one-off decision or decides similar cases repeatedly. What is good for a one-off decision may be bad for the repeated decision. In other words, to use an illustrative context: if one sells a house once, the relevant criteria differ from those of someone who sells houses every week like a professional agent; this is not because of the impersonal impact but because of doing it repeatedly; of course, a professional agent has additional criteria for decisions, too.

Three Paradigmatic Examples of Risk

In the following three examples, we present the genuine character of situations under uncertainty and discuss various useful strategies. The criteria used are not always defensible, and they have their relative merits and disadvantages.

Of course, the decision based on a specific criterion will depend on it and possibly switch if the criterion is changed. If probabilities are involved in the decisions, their values can change even in the same problem for the various stakeholders involved. There is no unique view of a problem; rationality depends also on the position of a stakeholder in a decision.

Example 1. Insurance: Exchange Money for Certainty

We will embed matters in a decision with two stakeholders that exploit different types of information for their decision. In any insurance contract, two partners mutually exchange money and the status of uncertainty. For a full-coverage car insurance (for one year), the insurance company gives up its position of certainty (no loss) and offers to pay the potential costs of an accident, while the client pays a certain amount (the premium) in advance in order to leave the situation of uncertainty about an accident and reach a status of certainty over the financial cost of an accident.

A similar situation occurs in any bet in the state lottery, bets on sports, or option contracts in the financial market. If both parties apply the same principles, the question arises how the contract can be advantageous for both; it seems like a paradoxical situation. However, both stakeholders have their own viewpoint and use different criteria that fit their circumstances better. The insurance company has many contracts and thus can use the frequencies of accidents from past statistics to estimate the underlying probabilities.

Since the company accumulates assets, it is free of utility considerations. The client, however, is a unique person with different habits (driving skills, driving regions, exposure to special risks, etc.). Furthermore, the client does not have large financial assets, so that utility considerations of money become relevant.

It is interesting that states normally have no insurance for their cars: they have a larger fleet of cars and a different financial background. We discuss a crude model for the potential future of one year, considering only a total wreckage or no accident (Borovcnik; see Table 6).

Table 6. A crude model for the insurance contract

The insurance company may base its model on money and an estimate of the probability for the damages and related payments by past frequencies of events.

The car owner has to find his or her personal probabilities and take the utility of money into consideration. Without utility, a so-called break-even point can avoid the difficult process of eliciting the personal probabilities. The break-even here is at odds of 1 : 29 for the total wreckage, as in this case the decisions for the insurance and against it each have the same cost, namely 1,; a person not considering utility would thus be indifferent between the two actions.
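The break-even reasoning can be sketched in a few lines; the premium and damage figures below are assumptions chosen to match the odds of 1 : 29 (a premium of 1,000 against a total wreckage of 30,000).

```python
def expected_cost_insured(premium):
    # With full coverage, the cost equals the premium whatever happens.
    return premium

def expected_cost_uninsured(p_wreck, damage):
    # Without insurance, the expected cost is probability times damage.
    return p_wreck * damage

premium, damage = 1_000, 30_000          # assumed illustrative figures
p_break_even = premium / damage          # probability equating both actions
odds = (1, round((1 - p_break_even) / p_break_even))
```

Here `odds` evaluates to `(1, 29)`: a client whose personal probability of a total wreckage is higher than 1/30 profits, in expectation, from the contract; a client with a lower personal probability does not.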

The model can be refined to also include smaller accidents (e.g., parking damages).

Example 2. Using Probabilities for Optimization of a Decision under Uncertainty

We illustrate how a person willing to model uncertain future outcomes by probabilities may optimize a current decision. Suppose the demand D for a journal is uncertain.

We model it, for reasons of simplicity, by the following discrete probabilities pi. How many units should be printed if you have a choice to print 1,, 2,, …?

Table 7. Probabilities for demand for the journal and cost of copies
Demand di:        1,     2,     3,     4,     5,
Probability pi:   0.40   0.30   0.20   0.06   0.04
Copies aj:        1,     2,     3,     4,     5,
Cost C(aj):       2,     2,     2,     2,     2,

Calculation of the Expected Profit of a Single Decision. The cost is determined by the decision but the income remains subject to randomness.

We might calculate its expected value in the sense of (4) in order to judge the present decision. We get a monetary value but cannot interpret it in isolation; we will have to calculate the expected profit also for the other possible actions and then compare the values.

It is important to note that an action cannot be judged in isolation: how could we interpret the expected profit, or the maximum possible loss, on its own? A rational judgement requires the comparison of alternatives. Thus, we repeat the analysis for the alternative numbers of copies (1, to 5,) and arrange the results of the computations in a combined matrix of net profits (Table 9).

Comparing Different Decisions. We compare our options with respect to the number of copies according to the criterion of expected profit rather than money (we could also use the utility of money).

Table 9. Matrix of net profit depending on the decisions and the actual demand

The second column of Table 9 contains the random variable profit for the option of 2, copies, which was already displayed in Table 8.

Other entries are calculated analogously. The probabilities for the entries in the second row are derived from those for the demand; they are contained in the outermost right column. According to the options, we compare five different distributions for the profit; we could write them in separate tables and draw their bar graphs.

As the probabilities are the same, we write the distributions compactly in the profit matrix. This reminds us of the principle of avoiding a sure loss, which is basic in SJT.

This reflects a basic property of improving decisions under risk. Rarely can one find decisions that are better throughout (whatever will happen), but it is easy to rule out inferior decisions. However, the remaining admissible actions cannot be compared to each other without a further criterion, and which decision is better depends on the criterion used.

To improve a situation in one respect (to have a higher expected net profit) is accompanied by the risk of higher potential losses. One may even speak of an invariant of human life, as seen from a general philosophical perspective on risk. The option 3,, which yields a higher expected profit, is better. However, it bears the risk of a loss if demand is only 1,, which has a probability of 0.40. It turns out that option 3, yields the maximum expected profit and is, in the present model, the best decision.

The decision depends on the criterion used. If the criterion for the decision were to minimize the maximum possible loss, then the option of 1, copies would be best, but this is not a feasible option at all.

A minimax principle (minimizing the maximal possible loss) thus may lead to nonsensical decisions. Again, a general observation is that fearing the maximum loss will often end in a poor decision. If probabilities are modelled for the demand for copies of the journal, this generally leads to better solutions but bears the risk of somewhat higher losses. In this way, the Bayesian view would use risk in the sense of (4). The distribution of the demand is usually the result of validating a prior distribution on the demand against empirical data via the Bayesian formula and has an SJT character.
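The comparison of the two criteria can be reproduced with the demand distribution above; since the monetary figures are not fully legible here, the price and cost function are our assumptions (a price of 1.5 per sold copy, printing costs of 1,800 plus 0.2 per copy).

```python
demands = [1000, 2000, 3000, 4000, 5000]   # demand levels d_i (assumed)
probs   = [0.40, 0.30, 0.20, 0.06, 0.04]   # their probabilities p_i

def cost(a):
    return 1800 + 0.2 * a                  # assumed printing cost for a copies

def profit(a, d):
    return 1.5 * min(a, d) - cost(a)       # income from sold copies minus cost

def expected_profit(a):
    return sum(p * profit(a, d) for p, d in zip(probs, demands))

def worst_case(a):
    return min(profit(a, d) for d in demands)

best_by_expectation = max(demands, key=expected_profit)   # -> 3000
best_by_minimax     = max(demands, key=worst_case)        # -> 1000
```

With these figures, the expectation criterion picks 3,000 copies, while the minimax criterion picks 1,000 copies even though that option loses 500 with certainty; this illustrates why minimizing the maximal loss may lead to a nonsensical decision.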

Instead of basing the calculations on money, we could also attribute a utility to the different amounts of money.

Example 3. Medical Diagnosis

Medical diagnosis is a natural context to introduce conditional probabilities and to reflect on the possible errors in making decisions. The question is whether the result of the diagnosing variable can be used for diagnosing the disease or not.

Table 10. Data from an investigation of the disease D and a diagnosing method to detect D

We imagine that each of the persons corresponds to a ball that has two markers, one for the status of the disease and the other for the result of the diagnosis. We put all the balls into an urn and draw from it randomly. We get the following probabilities, which are easily read off Table 10. We can speak of the probability of a positive diagnosis being 0. Its complementary probability is linked to an error of the diagnosis; here we have an error rate of 0.

The context illustrates that diagnosing for the disease or for the absence of the disease is linked to risk. We can make two errors in deciding that a positive person has the disease and a negative person does not have the disease D.

We can wrongly classify a person as having the disease (first row, second entry), and we can wrongly classify a person as not having the disease (second row, first entry). The size of the errors, i.e., the probabilities of these misclassifications, depends on the prevalence and on the quality of the diagnostic method.

Usually, the prevalence is estimated, and the sensitivity and specificity are checked by a laboratory study of cases which are classified by other methods, so that we know the status of D for these persons with certainty.
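The two error probabilities combine with the prevalence via the Bayesian formula. The figures of Table 10 are not reproduced here, so the following sketch uses illustrative values for the prevalence, sensitivity, and specificity.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive result) by the Bayesian formula."""
    p_pos_if_diseased = sensitivity
    p_pos_if_healthy = 1 - specificity           # false-positive rate
    p_pos = (prevalence * p_pos_if_diseased
             + (1 - prevalence) * p_pos_if_healthy)
    return prevalence * p_pos_if_diseased / p_pos
```

For example, with an assumed prevalence of 0.008, sensitivity of 0.9, and specificity of 0.93, a positive result raises the probability of the disease only to about 0.09, which anticipates the surprising effect of a low prevalence discussed below.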

Elementary Approaches to Probability and Risk

Probabilities, especially conditional probabilities, are difficult to perceive and hard to estimate and interpret in a specific context.

Also, such probabilities reflect that we explore a real situation only via models that need not fit it. There are interesting proposals to simplify the probabilistic data and to visualize them so that the information is more readable. Gigerenzer's research team has also investigated which visualization is better for obtaining and understanding the solution, which is usually derived from calculations that are rarely understood and are based on the Bayesian formula.

Suppose the data for the screening scheme for the detection of breast cancer are the following (Table 11; Gigerenzer): of women between 40 and 50, 0. % have breast cancer; of those who have breast cancer, most will receive a positive mammogram. In a probabilistic framework, a woman is randomly selected from the population and undergoes the mammogram.

After she has a positive result, the question is: what is the conditional probability that she has breast cancer? The answer can be obtained by the Bayesian formula, which is quite formal. Gigerenzer suggests transforming all conditional probabilities into natural frequencies, i.e., expected values for a group of 1, or more persons.

Table: Transforming conditional probabilities to natural frequencies by expected numbers in a statistical village

We round off the expected values to whole numbers; the other data can then be filled in by the usual side constraints.

From the table of natural frequencies we can calculate any probability related to the diagnosis (7). Remarkably, the positive diagnosis has an unexpectedly low conditional probability for the disease, because of its low prevalence. In fact, Gigerenzer suggests arranging the expected numbers in a tree diagram instead of the two-way tables used here.
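The transformation itself is mechanical; the following sketch (all parameter values are illustrative assumptions, not the figures of the tables above) fills a statistical village of a given size with expected, rounded counts.

```python
def natural_frequencies(population, prevalence, sensitivity, specificity):
    """Expected whole-number counts in a 'statistical village'."""
    diseased = round(population * prevalence)
    healthy = population - diseased
    true_pos = round(diseased * sensitivity)          # diseased, test positive
    false_pos = round(healthy * (1 - specificity))    # healthy, test positive
    return {
        "diseased": diseased,
        "healthy": healthy,
        "true_positive": true_pos,
        "false_positive": false_pos,
        # conditional probability of the disease given a positive result
        "p_disease_given_positive": true_pos / (true_pos + false_pos),
    }
```

With assumed values of a prevalence of 0.008, sensitivity 0.9, and specificity 0.93 in a village of 1,000 women, only 7 of the 7 + 69 positive results belong to diseased women, so the conditional probability is 7/76, roughly 9 %.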

In his icon array, he systematically compares the expected numbers of two scenarios, one with the screening scheme applied and the other without, over a longer period (see Figure 1). This allows judging the benefit of the screening as compared to a decision not to undergo the screening regime. The data in Table 12 display the expected numbers and illustrate the effect of PSA screening and digital rectal examination; the numbers are for men aged 50 years or older, not participating vs. participating in screening.

Table 12. Expected numbers of diverse outcomes in a ten-year screening scheme applied to 1, men, as compared to the outcome of no screening (exact figures are slightly smoothed here): 1, men without screening vs. 1, men with screening.

Figure 1. Icon array to illustrate the expected outcome of the prostate screening system as compared to no screening (from Spiegelhalter).

In the usual format, one has to learn to read the information and the formalism.

Many formats in scientific communication hide more of the inherent information than they clarify. Thus, there is an urgent need for elementary approaches. However, there are also disadvantages: the basic assumption in the approach of natural frequencies is that people are all alike. On the one side, there is a psychological barrier to accepting such an averaging view.

On the other side, there are not 1, men who are like me, as the icon array suggests. The data and the derived results, when enhanced by the visual representation, appear factual and true. The key point is that whole numbers, or visual representations of them, can appear to carry more validity than is appropriate, as models and imprecise information are used. Both the probability of prostate cancer (the prevalence) and the sensitivity and specificity of the diagnosing procedures used in the screening regime are crude estimates of the underlying probabilities.

Odds for a six on a die are 1 : 5, and for a head on a coin the odds are 1 : 1 (fifty-fifty). In the diagnosing example from above, we have prior odds for the disease of 8 : , or roughly 1 : . The advantage of this calculus with probabilities is that we can identify the low prior odds of the prevalence as the cause of the surprisingly low probability of having breast cancer after a positive result.
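In the odds form, the Bayesian formula becomes a single multiplication: posterior odds = prior odds times likelihood ratio. A sketch with the illustrative diagnostic values used before (all figures are our assumptions):

```python
def posterior_odds(prior_odds, sensitivity, specificity):
    """Bayes' rule in odds form: posterior = prior times likelihood ratio."""
    likelihood_ratio = sensitivity / (1 - specificity)
    return prior_odds * likelihood_ratio
```

With assumed prior odds of 8 : 992 and a likelihood ratio of 0.9/0.07 (about 12.9), the posterior odds are only about 1 : 9.6, i.e., a probability below 0.1; the low prior odds dominate the strong likelihood ratio.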

Evaluating and Sharing Risks between Different Stakeholders

Apart from personal risks, one may differentiate risks according to the field where they arise (gambling, investments, etc.) and according to whether they concern an individual or the wider public. In the latter case we might speak of societal risk. Climate change is a classic example of societal risk. Risks in medicine are harder to classify, as there are single patients who are at risk but there is also an institutional side involved. The institutional side has its own goals, which may differ from those of each individual.

Risks Shared between Stakeholders of Different Levels

For example, in public health there may be a discussion whether children should be vaccinated against measles or not. The health system is concerned about public order and is responsible for preventing an epidemic outbreak of the disease. It can build a frequentist scenario, weigh the different possibilities by probabilities, and calculate the risks involved in wide-spread vaccination (possibly with a duty to partake) and in reduced vaccination (no public financial support, no advertisement, and counselling of parents).

The individual parents have their subjective probabilities (own experience with the disease, strength of the immune system, alternative ways to support health and to cure in case of a disease, etc.). They cannot rely on the frequentist probability of the public health system, and the impact is hard to judge: the possible impact of the disease and the possibility of a bad consequence of the vaccination itself. While the parents bear the personal consequences in either case, the doctor can be made liable for not informing about the possibility of the vaccination or about its negative side-effects.

The pharma industry is a further stakeholder in this case, with completely different financial interests and goals. For the different situations of the stakeholders in the case of the recent HPV vaccination, which is intended to protect against cervical cancer, see Borovcnik and Kapadia (a). Medical doctors enhance and increase their role by warning of health risks.

With these warnings they are always on the safe side.

Societal Risks: Standards of Science and Society

In technological settings, the evaluation of risks has become a driving force for developing related concepts. And there are no absolutely safe technologies, so that the key question becomes whether we can reasonably decrease the risk.

When risk considerations are applied to complex systems (climate, ecosystems, world economy, etc.), the difficulties multiply. Whether climate change really establishes a risk remains doubtful for many people.

The probabilities for the various developments are hard to assess and may be changed by time, technology, and economics. Many attempts are undertaken to simulate models (scenarios) on the basis of assumptions, with the well-known projected effects of flooding, an increase in hurricanes, etc. The models resemble scenarios of "what happens if …?". This example reveals a key problem in presenting scientific results to the public.

Standards that are viable within research may not be applicable for such a generalization, especially as the stakeholders have their own probabilities and are affected by the consequences completely differently. And there are dynamic developments involved that cannot be influenced further. For example, once there is a public decision for the use of atomic power for energy, there are industries involved in the business who make their profit, while on the other hand the public has to bear the consequences if an accident happens (as in Fukushima), with the potential harm to health but also the financial burden of coping with the damage.

The calculated risks are only a support for finding a common decision; they can never be used in such a way that decisions execute themselves. Apart from the risk per se, there is a perception of risk that is not based on well-calibrated subjective probabilities but more or less on intuitive short-cuts of such a risk calculation, based on beliefs and gut feelings. When scientific results are to be applied to other areas, especially to public decisions in society, we have to clarify the standards of evidence.

In science, one essential strategy is to find hypotheses of specific relations between variables by rejecting a null hypothesis that there are no such relations; the type I error of falsely rejecting the null becomes vital. What about the type II error? If it is very small, it might be very easy for an alternative hypothesis of specific relations to get acknowledged by rejecting the null. We have dealt with these types of errors in a separate section earlier, as they form part of the inner-mathematical approach towards risk within hypothesis testing.
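A minimal simulation can illustrate the two error types; the test, the parameters, and the error levels below are our illustrative assumptions (a two-sided z-test for a mean at the 5 % level).

```python
import random
import statistics

def error_rates(mu_null=0.0, mu_alt=0.5, sigma=1.0, n=30,
                z_crit=1.96, trials=4000):
    """Estimate the type I error (null true, null rejected) and the
    type II error (alternative true, null retained) by simulation."""
    def rejects(mu):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        z = (statistics.mean(sample) - mu_null) / (sigma / n ** 0.5)
        return abs(z) > z_crit
    type_1 = sum(rejects(mu_null) for _ in range(trials)) / trials
    type_2 = sum(not rejects(mu_alt) for _ in range(trials)) / trials
    return type_1, type_2
```

With these settings, the type I error stays near the nominal 5 % while the type II error is roughly 20 %; shrinking the type II error (e.g., by a larger sample size n) makes it ever easier for an alternative hypothesis to get acknowledged by rejecting the null.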



