Decision making under uncertainty
The area of choice under uncertainty represents the heart of decision theory. The idea of expected value has been known since the 17th century; Blaise Pascal invoked it in his famous wager, contained in his Pensées (published in 1670). The idea is that, when faced with a number of actions, each of which could give rise to more than one possible outcome with different probabilities, the rational procedure is to identify all possible outcomes, determine their values (positive or negative) and the probability of each outcome under each course of action, multiply the two and sum them to give an “expected value” for each action; the action chosen should be the one with the highest expected value. In 1738, Daniel Bernoulli published an influential paper entitled Exposition of a New Theory on the Measurement of Risk, in which he uses the St. Petersburg paradox to show that expected value theory must be normatively wrong. He gives an example in which a Dutch merchant is trying to decide whether to insure a cargo being sent from Amsterdam to St Petersburg in winter. In his solution, he defines a utility function and computes expected utility rather than expected financial value.
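To make the distinction concrete, the short Python sketch below compares the two criteria on a truncated St. Petersburg game; the 30-toss cap is an arbitrary illustrative cutoff, and the logarithmic utility follows Bernoulli's own choice.

```python
import math

def expected_value(outcomes):
    """Expected value: sum of probability * payoff over all outcomes."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, utility=math.log):
    """Expected utility: sum of probability * utility(payoff), here with Bernoulli's log utility."""
    return sum(p * utility(x) for p, x in outcomes)

# St. Petersburg game, truncated at 30 tosses for illustration:
# with probability 1/2**k the game ends at toss k and pays 2**k.
st_petersburg = [(0.5 ** k, 2 ** k) for k in range(1, 31)]

print(expected_value(st_petersburg))    # grows by 1 for every extra toss allowed, so it is unbounded
print(expected_utility(st_petersburg))  # converges to a finite value (about 2 * ln 2)
```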
In the 20th century, interest was reignited by Abraham Wald’s 1939 paper[8] pointing out that the two central procedures of sampling-distribution-based statistical theory, namely hypothesis testing and parameter estimation, are special cases of the general decision problem. Wald’s paper renewed and synthesized many concepts of statistical theory, including loss functions, risk functions, admissible decision rules, antecedent (prior) distributions, Bayesian procedures, and minimax procedures. The phrase “decision theory” itself was first used in this sense in 1950 by E. L. Lehmann.
The revival of subjective probability theory, in the work of Frank Ramsey, Bruno de Finetti, Leonard Savage and others, extended the scope of expected utility theory to situations where subjective probabilities can be used. At around the same time, von Neumann and Morgenstern’s theory of expected utility[10] proved that expected utility maximization follows from basic postulates about rational behavior.
The work of Maurice Allais and Daniel Ellsberg showed that human behavior has systematic and sometimes important departures from expected-utility maximization. The prospect theory of Daniel Kahneman and Amos Tversky renewed the empirical study of economic behavior with less emphasis on rationality presuppositions. Kahneman and Tversky found three regularities in actual human decision-making: “losses loom larger than gains”; people focus more on changes in their utility states than on absolute utilities; and the estimation of subjective probabilities is severely biased by anchoring.
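As an illustration of “losses loom larger than gains”, the sketch below evaluates a prospect-theory-style value function around a reference point; the curvature and loss-aversion parameters (0.88 and 2.25) are the commonly cited Kahneman–Tversky median estimates and are used here purely for illustration.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Value of an outcome x measured relative to a reference point (x > 0 is a gain, x < 0 a loss).
    Parameter values are illustrative median estimates, not definitive constants."""
    if x >= 0:
        return x ** alpha             # diminishing sensitivity to gains
    return -lam * ((-x) ** beta)      # losses are weighted more heavily (loss aversion)

# A gain and a loss of the same size are not felt symmetrically:
print(prospect_value(100))    # about  57.5
print(prospect_value(-100))   # about -129.4, roughly 2.25 times as painful
```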
PERT
PERT is an acronym for Program (or Project) Evaluation and Review Technique, a method for planning, scheduling, organising, coordinating and controlling uncertain activities. The technique studies and represents the tasks undertaken to complete a project, in order to identify the least time needed to complete each task and the minimum time required to complete the whole project. It was developed in the late 1950s and aims to reduce the time and cost of a project.
PERT uses time as a variable representing the planned resource application along with the performance specification. In this technique, the project is first divided into activities and events. The proper sequence is then ascertained and a network is constructed. The time needed for each activity is then estimated, and the critical path (the longest path connecting the events) is determined, as in the sketch below.
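A minimal sketch of these steps is given below, using a small hypothetical four-activity network and the standard PERT three-point estimate, (optimistic + 4 × most likely + pessimistic) / 6, for each activity's expected duration; the activity names and figures are invented for illustration.

```python
# Hypothetical activity network: activity -> (optimistic, most likely, pessimistic, predecessors),
# listed in a valid topological order.
activities = {
    "A": (2, 4, 6, []),
    "B": (3, 5, 9, ["A"]),
    "C": (4, 6, 8, ["A"]),
    "D": (1, 2, 3, ["B", "C"]),
}

def pert_time(o, m, p):
    """Standard PERT three-point (beta) estimate of expected activity duration."""
    return (o + 4 * m + p) / 6

# Forward pass: earliest finish of each activity = latest predecessor finish + own expected duration.
earliest_finish = {}
for name, (o, m, p, preds) in activities.items():
    start = max((earliest_finish[q] for q in preds), default=0)
    earliest_finish[name] = start + pert_time(o, m, p)

project_duration = max(earliest_finish.values())
print(earliest_finish)   # {'A': 4.0, 'B': 9.33..., 'C': 10.0, 'D': 12.0}
print(project_duration)  # expected length of the critical path (here A -> C -> D = 12.0)
```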
PERT was developed by the U.S. Navy in the 1950s to help coordinate the thousands of contractors it had working on myriad projects. While PERT was originally a manual process, today there are computerized PERT systems that enable project charts to be created quickly.
The main weakness of the PERT process is that the time estimated for completing each task is highly subjective and sometimes no better than a wild guess. Frequent progress updates help refine the project timeline once the project is underway.
CPM
Developed in the late 1950s, the Critical Path Method (CPM) is an algorithm used for planning, scheduling, coordinating and controlling the activities in a project. Here, it is assumed that the activity durations are fixed and certain. CPM is used to compute the earliest and latest possible start time for each activity.
The process distinguishes critical from non-critical activities in order to reduce the project time and avoid bottlenecks. Critical activities are identified because, if any of them is delayed, the whole project is delayed; this is why the method is called the Critical Path Method.
In this method, a list is first prepared of all the activities needed to complete the project, followed by the computation of the time required to complete each activity. The dependencies between the activities are then determined. Here, a ‘path’ is defined as a sequence of activities in the network, and the critical path is the path with the greatest total duration.
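The following sketch illustrates these steps on a small hypothetical project with fixed durations, computing earliest and latest start times with forward and backward passes and reading off the zero-slack (critical) activities; the task names and durations are invented for illustration.

```python
# Hypothetical project with fixed (deterministic) durations, as CPM assumes.
# task -> (duration, predecessors); listed in a valid topological order.
tasks = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
    "E": (1, ["D"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for t, (dur, preds) in tasks.items():
    es[t] = max((ef[p] for p in preds), default=0)
    ef[t] = es[t] + dur

project_end = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS).
lf, ls = {}, {}
for t in reversed(list(tasks)):
    dur, _ = tasks[t]
    successors = [s for s, (_, preds) in tasks.items() if t in preds]
    lf[t] = min((ls[s] for s in successors), default=project_end)
    ls[t] = lf[t] - dur

# Critical activities have zero slack (LS - ES = 0); together they form the critical path.
critical = [t for t in tasks if ls[t] - es[t] == 0]
print(project_end)  # 13
print(critical)     # ['A', 'B', 'D', 'E']
```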