Monday, February 23, 2009

Assignment #5

Berry, J., Fleisher, L., Hart, W., Phillips, C., and Watson, J.-P. (2005) “Sensor Placement in Municipal Water Networks”, Journal of Water Resources Planning and Management, 131(3), pp. 237-243.

In the article, the authors discuss their research to find the best method for placing sensors in a water distribution system network. As the threat posed by terrorism has become more evident over the last decade, the potential vulnerability of water distribution systems has become a point of concern. Sensors to test for a variety of potential contaminants are under development, but it is important to also investigate the placement of sensors to ensure that a maximum number of people are protected while minimizing cost.

The authors used integer programming to decide where to place the sensors. They called their particular method Mixed Integer Programming (MIP), but from their description I could not tell how it differed from ordinary integer programming. Their model minimized the number of unprotected people, with binary decision variables representing whether or not to place a sensor at each candidate location.

They used their MIP method on three pipe networks: two example networks from EPANET and one real network. For each network, they found the flow patterns every six hours over a 24-hour period (four patterns per network). They then estimated the population density served by each node and the risk probability for each node, introducing noise to account for uncertainty in the exact values. Finally, they ran their MIP for each model to calculate the population not covered by varying numbers of sensors. They found that, as the number of sensors increases, the number of people not protected by the sensors decreases.
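The core of the formulation is easy to sketch. The following is a pure-Python toy, not the authors' model: the coverage sets and populations are invented, and small instances are solved by brute-force enumeration instead of a MIP solver, but it minimizes the same quantity, the population left unprotected by a fixed number of sensors.

```python
from itertools import combinations

# Hypothetical data: for each candidate node, the population groups a
# sensor placed there would protect (in the paper these sets would come
# from the flow patterns and contamination scenarios; values invented).
covers = {
    "A": {1, 2},
    "B": {2, 3, 4},
    "C": {4, 5},
    "D": {1, 5, 6},
}
population = {1: 300, 2: 150, 3: 500, 4: 220, 5: 90, 6: 400}
total = sum(population.values())

def unprotected(num_sensors):
    """Minimum unprotected population achievable with num_sensors sensors."""
    best = total
    for placement in combinations(covers, num_sensors):
        protected = set().union(*(covers[n] for n in placement))
        best = min(best, total - sum(population[g] for g in protected))
    return best

for k in range(1, 5):
    print(k, unprotected(k))
```

As in the paper's results, the unprotected population is non-increasing as sensors are added; here two well-placed sensors already cover everyone.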

For me, this article was confusing at times. I couldn't really understand how their programming method is different from integer programming, and their explanation of how they were using noise wasn't clear. The discussion on the sensitivity of the sensor placements, while interesting, didn't seem to add much to their final conclusions.

I feel like, if sensors were to be placed in a real network using a similar method, it would be important not to neglect time as the authors did in this paper, since time is vital when developing emergency management procedures for after a contaminant is detected. Also, the placement of sensors (industrial vs. commercial vs. residential areas) could become a contentious issue for a real network, whereas this paper glosses over it.

Monday, February 16, 2009

Assignment 4

Lee, B. H. and Deininger, R. A. (1992) “Optimal Locations of Monitoring Stations in Water Distribution Systems”, Journal of Environmental Engineering, 118(1) pp. 4-16.

The US EPA requires the monitoring of drinking water quality. This testing is typically done by testing stations. In the article, the authors attempt to use Linear Programming to mathematically calculate where to place the testing stations in the system so that the greatest percentage of water in the system will be tested. Their methodology is based on the fact that water at a downstream node must be of lower quality than at the node upstream where it came from. Also, the researchers used skeleton models of water distribution systems with constant flow directions.

Their LP used boolean (true/false) variables representing whether or not a testing station was placed at a particular node, and maximized the sum over nodes of (nodal demand) × (station present, 0 or 1). The constraints related to the flow geometry, the demand patterns, and the number of testing stations available.
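A minimal sketch of this kind of coverage maximization, with invented demands and coverage sets (the real ones would come from the network's flow directions), solved by enumeration rather than an LP solver since the instance is tiny:

```python
from itertools import combinations

# Hypothetical data: a station at a node tests water that also serves the
# listed demand nodes downstream of it (sets would be derived from the
# network's constant flow directions; all values invented).
demand = {"n1": 40, "n2": 25, "n3": 60, "n4": 30, "n5": 45}
covered_by = {
    "n1": {"n1", "n2", "n3"},   # water at n1 continues on to n2 and n3
    "n2": {"n2"},
    "n3": {"n3", "n4"},
    "n4": {"n4"},
    "n5": {"n5"},
}

def best_placement(num_stations):
    """Placement of num_stations stations maximizing covered demand."""
    best, best_nodes = -1, None
    for nodes in combinations(covered_by, num_stations):
        covered = set().union(*(covered_by[n] for n in nodes))
        score = sum(demand[c] for c in covered)
        if score > best:
            best, best_nodes = score, nodes
    return best, best_nodes

print(best_placement(2))
```

Note that placing a second station downstream of the first adds little, so the search naturally spreads stations across independent branches.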

This strategy was used for two water distribution systems, in Michigan and Connecticut, and the researchers found they were able to significantly increase the efficiency of the testing using their LP solutions.

The article seemed to present a realistic example where a form of Linear Programming might be used on a real-world problem. The methods made sense intuitively, and the results were easy to understand. My one problem with the paper is the assumption of fixed flow patterns. This simplification was clearly forced by the limited computing power of the time, which restricted the researchers to one flow pattern (or four for the Connecticut system). As we all know, flow patterns change throughout the day as demands at the different nodes change. I wonder whether, with modern computers, you could calculate the coverage provided by a station at a particular node over the course of a day in order to better represent the dynamics of the system, and whether this would actually produce different solutions from the ones the paper arrived at.

Monday, February 9, 2009

Assignment 3

Hardin, G. (1968) “The Tragedy of the Commons”, Science, 162, pp. 1243-1248.


In this paper, the author describes what he calls the “Tragedy of the Commons”. He describes commons as finite resources shared by the public. The tragedy is that it is logical for an individual to increase his use of the commons, since the individual receives all the benefits while the cost is shared by everyone using the commons. However, when everyone follows this logic, the commons may be destroyed to the detriment of all its users. Hardin contrasts the Tragedy of the Commons with the laissez-faire principles of Adam Smith, which hold that what is good for the individual will be good for society. Examples of the Tragedy from the paper include population growth, exploitation of natural resources, and pollution of the environment. The author concludes that “the morality of an act is a function of the state of the system at the time it is performed” and that historically the only ways to solve the tragedy of the commons are regulation or transforming the commons into private property, forcing people to take responsibility for their own actions.


I found the paper to be an interesting discussion of human psychology concerning resources owned by the public. His example of livestock sharing a common field was well thought out and logical. I tend to agree with his arguments concerning pollution: things like rivers, oceans, and air are common property which cannot be made into private property, so the only way to keep them from being destroyed by the Tragedy of the Commons is through some government intervention. I also agree that, when possible, transforming the commons into private property is the most effective way of averting the tragedy. At times, though, I felt the paper was more about promoting the author’s personal beliefs on population than about actual science. I found his argument that having children is comparable to adding sheep to a common field weak, while his references to “ultraconservatives” and Planned Parenthood exposed the author for what he really was: an ultra-liberal trying to promote his personal beliefs on human population by connecting them to the very real “Tragedy of the Commons”.

Monday, February 2, 2009

Assignment #2

Atwood, D. and Gorelick, S. (1985) “Hydraulic Gradient Control for Groundwater Contaminant Removal”, Journal of Hydrology, 76(1-2), pp. 85-106.

The paper covers a research project using linear programming to determine the optimal technique for removing polluted groundwater from the aquifer below the Rocky Mountain Arsenal near Denver. The researchers determined that contaminant removal would be accomplished by pumping the contaminated water from the aquifer. To keep the contaminated plume in place, they decided to use wells surrounding the plume to either pump or inject water, so that the hydraulic gradient would keep the plume from spreading.
The researchers first selected, by trial and error, the best of four possible locations for the well that would actually pump the contaminated water, assuming it would pump at a constant rate. They then developed a linear program to determine how much water should be pumped or injected at each of the surrounding wells. The objective was to minimize the total amount of pumping and injecting by adjusting the pumping/injection rates at the surrounding wells. The constraints were that the central well pump at its constant rate and that the pumping and injection pattern of the surrounding wells produce an inward-pointing gradient.
The researchers looked into two optimization techniques: global optimization, in which the optimal pumping/recharge schedule is found in a single optimization run over the whole horizon, and sequential optimization, in which they divided the contaminant removal into 32 pumping periods and calculated the best pumping/injection rates for each well in each period.
The research produced an optimal pumping and injection schedule. The paper states that global optimization resulted in a “different and better solution than the sequential strategy,” because the global strategy is capable of looking ahead into the long term.
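That distinction can be illustrated with a toy two-period model (all numbers invented, not from the paper): if containment injection costs more while contaminant mass remains in the ground, a global strategy that looks ahead front-loads the pumping, while a period-by-period sequential strategy does not.

```python
# Toy two-period cleanup model (all numbers invented): remove a plume of
# MASS volume-units with per-period pumping capacity CAP; containment
# injection each period costs INJ * (mass still in the ground that period).
CAP, MASS, INJ = 6.0, 10.0, 0.2

def total_cost(q1):
    """Total pumped + injected volume when q1 is pumped in period 1."""
    q2 = MASS - q1                  # period 2 removes whatever remains
    i1 = INJ * MASS                 # containment while the full plume is present
    i2 = INJ * (MASS - q1)          # containment for the leftover mass
    return q1 + q2 + i1 + i2

# Sequential strategy: optimize each period in isolation, so removal
# is simply split evenly across the two periods.
sequential = total_cost(MASS / 2)

# Global strategy: search all feasible first-period rates jointly
# (period 2 must still be able to remove the rest within its capacity).
candidates = [0.5 * k for k in range(int(2 * CAP) + 1)]
global_best = min(total_cost(q1) for q1 in candidates if MASS - q1 <= CAP)

print(sequential, global_best)   # the global schedule is cheaper
```

The global schedule pumps at capacity early, shrinking the plume and cutting later containment costs, which is exactly the "looking ahead" advantage the authors describe.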
This paper seems significant because it applies linear programming, which we have been studying lately in CVEN 665, to a realistic problem (groundwater contamination). Although the problem being solved is quite complicated, I was able to understand theoretically what the researchers were attempting, which is helpful when learning the best ways of applying a theoretical concept such as linear programming to an actual problem.
Although not a fault, the fact that this paper is from 1985 could limit its practical application in modern engineering. Many of the assumptions and formulations were selected and justified by the intention of making calculations easier. The authors discuss computation times and the computers being used several times, which, while providing insight into some of their choices, may not be useful to the modern engineer. Furthermore, near the end of the paper, the authors state that the global solution is better than the sequential solutions, but technological limitations prevented them from exploring this in any depth. I think this would be a logical next step: attempting to create some sort of hybrid global-sequential strategy to further optimize the solution.