Wednesday, April 22, 2009

Task ten (and maybe the last)

Brill, E. D. (1979) "Use of Optimization Models in Public-Sector Planning," Management Science, 25(5) pp. 413-422.

Summary
In this article, Brill discusses how multi-objective optimization models can be used for planning in the public sector. He notes that multi-objective programming is descended from single-objective optimization, and that both are built to calculate "the solution". For many public sector problems, though, there isn't necessarily "a solution", even when using multi-objective analysis. This is because these problems are very often "wicked" problems, as described by Liebman (for more information, refer to my discussion of that article in task one). With public sector problems, quantifying values can be very difficult. Multi-objective programming can still be used on these problems to look at some of the possible solutions and to help encourage human creativity, but in the end there is no one "solution" for public sector problems.
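
Brill's point clicked for me once I thought about how a multi-objective run actually produces answers. Even for a toy two-objective linear program there is no single "the solution": sweeping the weight between the objectives traces out a set of alternatives for a decision-maker to pick from. A minimal sketch (the feasible region and objectives are invented for illustration):

```python
# Toy two-objective LP: sweeping the weight between the objectives
# generates a set of alternative solutions rather than one answer.
# The feasible region and objectives are invented for illustration.
import numpy as np
from scipy.optimize import linprog

A_ub = [[1, 2],   # x1 + 2*x2 <= 8
        [3, 1]]   # 3*x1 + x2 <= 9
b_ub = [8, 9]

solutions = []
for lam in np.linspace(0, 1, 11):
    # maximize lam*x1 + (1 - lam)*x2, so minimize the negative
    c = [-lam, -(1 - lam)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    x = tuple(np.round(res.x, 3))
    if x not in [s for _, s in solutions]:
        solutions.append((lam, x))

for lam, x in solutions:
    print(f"weight on objective 1 = {lam:.1f} -> x = {x}")
```

Each distinct weighting lands on a different corner of the feasible region, which is exactly the "many possible answers" situation Brill describes.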

Discussion
I agree with Brill that public sector problems often have a number of stakeholders, each with different valuations of the different objectives. This idea is very similar to, in fact nearly identical to, the ideas presented by Liebman in the article discussed earlier in the semester. I agree that multi-objective analysis could be useful for determining possible solutions. In general, this article struck me as a restatement of the Liebman article with some added discussion of multi-objective analysis.


Pan, T. C. and Kao, J. J. (2009) "GA-QP Model to Optimize Sewer System Design," Journal of Environmental Engineering, 135(1) pp. 17-24.

Summary
Efficient design of sewer systems is important since they require such a large capital investment, but their complex hydraulics make sewer systems difficult to optimize. This paper discusses using quadratic programming (QP) and a genetic algorithm (GA) together to find a number of possible designs. Because these optimizations can't capture every issue associated with a sewer system (many of the issues are unquantifiable), the paper makes a point of saying that it is vital to produce a number of good solutions via GA-QP which can then be judged against the unquantifiable constraints.
For this paper, the authors incorporated a quadratic program into their genetic algorithm. Two constraints were used: the pipes had to be large enough to meet demand, and pipe diameter had to increase downstream. Comparing their results to piecewise linearization and dynamic programming, the authors found that their GA-QP method produced more feasible solutions and ran faster.
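
A stripped-down sketch of the GA half of the method. This omits the embedded QP entirely and just evolves discrete pipe diameters against the two constraints mentioned above (enough capacity, no shrinking downstream), with penalties for violations. All the numbers (flows, slope, costs) are invented:

```python
# Simplified GA over discrete pipe diameters for a single sewer line.
# NOT the paper's GA-QP: the QP step is omitted, and all numbers
# (design flows, slope, costs) are hypothetical.
import math
import random

DIAMETERS = [0.3, 0.45, 0.6, 0.9, 1.2]          # available sizes (m)
COST_PER_M = {0.3: 80, 0.45: 120, 0.6: 180, 0.9: 300, 1.2: 480}
Q_DESIGN = [0.05, 0.12, 0.25, 0.40]             # design flow per reach (m^3/s)
SLOPE, N_MANNING, REACH_LEN = 0.004, 0.013, 100.0

def full_flow_capacity(d):
    """Manning full-pipe capacity (SI units)."""
    area, rh = math.pi * d ** 2 / 4, d / 4
    return area * rh ** (2 / 3) * math.sqrt(SLOPE) / N_MANNING

def fitness(indiv):
    """Cost plus large penalties for the two constraints."""
    cost = sum(COST_PER_M[DIAMETERS[i]] * REACH_LEN for i in indiv)
    penalty = 0.0
    for k, i in enumerate(indiv):
        if full_flow_capacity(DIAMETERS[i]) < Q_DESIGN[k]:
            penalty += 1e6                       # capacity violated
        if k > 0 and indiv[k] < indiv[k - 1]:
            penalty += 1e6                       # diameter shrinks downstream
    return cost + penalty

def evolve(pop_size=40, gens=200):
    pop = [[random.randrange(len(DIAMETERS)) for _ in Q_DESIGN]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.2:            # mutation
                child[random.randrange(len(child))] = \
                    random.randrange(len(DIAMETERS))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print([DIAMETERS[i] for i in best], fitness(best))
```

The penalty approach is the crude part; as I understand it, embedding the QP is what lets the real method handle the hydraulics properly instead of just punishing violations.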

Discussion
This was an interesting paper. The concepts discussed here might suit sanitary sewer studies for subdivisions of a certain scale. I wonder whether we could use similar concepts to linearize the piecewise functions in our Delaware Project in order to implement GA-QP.

Monday, April 13, 2009

Assignment NINE

Shiau, J. T. and Wu, F. C. (2006) "Compromise Programming Methodology for Determining Instream Flow Under Multiobjective Water Allocation Criteria," Journal of the American Water Resources Association, 42(5), pp. 1179-1191.

Summary
In this article, the authors discuss their analysis of a weir on Kaoping Creek in Taiwan. The weir was built to serve municipal and agricultural demands, and it currently operates under an operating rule designed to meet those demands while providing 9.5 cms for instream flow. Using their formulas for measuring hydrologic alteration, the authors found that the stream is nearly 70 percent hydrologically altered, a level which is potentially harmful to the aquatic species endemic to the stream. The authors then use compromise programming, similar to multi-objective analysis, to evaluate the effects of providing different instream flows on the degree of alteration and on the municipal supply. The study resulted in contours representing a sort of Pareto front.
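Compromise programming, as I understand it, picks the alternative closest (in a weighted Lp distance) to an "ideal point" where every objective takes its best value. A toy sketch of the mechanics, with two invented curves standing in for the alteration and supply-shortage objectives:

```python
# Toy compromise programming over candidate instream flows.
# The two objective functions below are invented stand-ins for the
# paper's hydrologic-alteration and supply-shortage measures.
import numpy as np

flows = np.linspace(0.0, 20.0, 201)            # candidate releases (cms)
alteration = 70.0 * np.exp(-flows / 8.0)       # % alteration: falls with flow
shortage = 0.15 * flows ** 1.5                 # supply shortage: rises with flow

def compromise(p=2.0, w=(0.5, 0.5)):
    objs = np.column_stack([alteration, shortage])
    ideal, worst = objs.min(axis=0), objs.max(axis=0)
    norm = (objs - ideal) / (worst - ideal)    # 0 = best, 1 = worst
    dist = (w[0] * norm[:, 0] ** p + w[1] * norm[:, 1] ** p) ** (1 / p)
    return flows[np.argmin(dist)]

for p in (1, 2, 100):                          # large p approaches minimax
    print(f"p = {p}: compromise flow = {compromise(p):.1f} cms")
```

Varying the weights and the exponent p is what sweeps out the kind of contours the authors present.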


Discussion
This seemed like an interesting paper. Last semester in the seminar, Wendy from the TWDB talked about a similar program that controlled reservoir release operations to improve downstream habitat while still meeting reservoir requirements.

I think the TWDB study may be an extension of this paper's approach: the TWDB included the effects of streamflow variability on downstream alteration, since they found that variable flows are important for maintaining natural conditions downstream. Perhaps that would be too complicated to implement in a multi-objective analysis study like this one.

Monday, March 30, 2009

The Ocho

Neelakantan, T. R. and Pundarikanthan, N. V. (2000) "Neural network-based simulation-optimization model for reservoir operation," Journal of Water Resources Planning and Management, 126(2) pp. 57-64.

Summary
Chennai (formerly known as Madras) is a major city in the state of Tamil Nadu in southern India. Although it receives 51 in/yr of rainfall, most of this comes during the three-month monsoon, so the city is prone to drought. The city's water is supplied by a series of reservoirs, which have typically been regulated through standard operating policy. This paper studies the management of these reservoirs using a neural network model.

The optimization approach used was to minimize the overall deficit index (ODI), where the ODI is the sum of the squares of the deficits of all the reservoirs, after each deficit has been normalized by the size of the demand on that reservoir. Two scenarios (one with the current system and one with proposed reservoirs included) were then optimized to compare their performance.
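As I understand the index, it could be computed something like this (a minimal sketch; the paper's exact normalization may differ):

```python
# Minimal sketch of an overall deficit index: squared deficits, each
# normalized by its reservoir's demand. The exact normalization in the
# paper may differ from this.
def overall_deficit_index(demands, supplies):
    odi = 0.0
    for demand, supply in zip(demands, supplies):
        deficit = max(demand - supply, 0.0)     # no credit for surplus
        odi += (deficit / demand) ** 2
    return odi

# Example: three reservoirs, one badly short.
print(overall_deficit_index([100.0, 50.0, 80.0], [90.0, 50.0, 40.0]))
# -> 0.01 + 0.0 + 0.25 = 0.26
```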

The optimization itself was carried out with a neural network, an algorithm loosely modeled on how a brain works. The network must first be trained on simulation results, and the optimization is then performed using the trained network in place of the full simulation. The authors concluded that neural networks can optimize the workings of large water resource systems.
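My rough mental model of this simulation-optimization idea: run the (expensive) reservoir simulation for a sample of operating policies, train a network to mimic it, then search the cheap network instead of the simulation. A minimal sketch with a one-parameter policy and a made-up "simulation" (uses scikit-learn purely as a stand-in for whatever the authors implemented):

```python
# Sketch of neural-network simulation-optimization: train a surrogate
# on (policy, ODI) samples, then search the surrogate. The "simulation"
# below is a made-up function standing in for a real reservoir model.
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_odi(hedging_factor):
    """Pretend reservoir simulation: returns an overall deficit index."""
    return (hedging_factor - 0.6) ** 2 + 0.05 * np.sin(8 * hedging_factor)

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, size=(200, 1))      # sampled policies
y_train = simulate_odi(x_train).ravel()

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(x_train, y_train)

# Optimize over the cheap surrogate instead of the simulation.
candidates = np.linspace(0, 1, 1001).reshape(-1, 1)
best = candidates[np.argmin(surrogate.predict(candidates))]
print(f"surrogate-optimal hedging factor ~ {best[0]:.2f}")
```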


Discussion

This paper was interesting in that it uses a new technique to try to solve an existing problem. Personally, I didn't have nearly enough knowledge of neural networks to understand much of what this paper was talking about.

My problems with (what I could understand of) this paper: I'm not really sure why the authors decided to force the system to maintain equity among the reservoirs, since it seems like leaving this requirement out might allow a better solution. Also, the training process sounded far too complicated to ever come into widespread use.

Monday, March 9, 2009

Reading #7

Perez-Pedini, C., Limbrunner, J. F., and Vogel, R. M. (2005) "Optimal location of infiltration-based best management practices for storm water management," Journal of Water Resources Planning and Management, 131(6) pp. 441-448.

Summary:
Traditionally, stormwater has been controlled by systems of detention basins. However, these structures are expensive to construct, so a growing trend is to use low impact development (LID) in the form of infiltration basins to curb runoff.

The Aberjona River watershed in Massachusetts was studied. The area was divided into 120 m x 120 m squares, and elevations, land use, and flow paths were found for each cell using GIS. The NRCS curve number (CN) method was used to find the infiltration and runoff for each square during a specific storm event. The optimization was set up so that each square is represented by a binary variable indicating whether an infiltration basin will be built there; if true, the CN for that cell is decreased by five, representing an increase in infiltration. A genetic algorithm was then used to find the cells with the greatest impact on reducing runoff. The algorithm was run several times with different numbers of infiltration ponds to develop a Pareto-optimal curve of number of infiltration ponds vs. reduction in runoff.
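A stripped-down sketch of the formulation (not the authors' code): each cell gets a binary variable, a selected cell's CN drops by five, and a GA searches for the set of cells that cuts event runoff the most. Grid size, CNs, and rainfall are all invented:

```python
# Toy version of the GA placement problem: pick k cells for infiltration
# basins (the CN drops by 5 in a selected cell) so as to minimize total
# event runoff by the NRCS curve number method. All inputs are invented.
import random

random.seed(1)
N_CELLS, K_PONDS, RAIN_IN = 60, 8, 3.0        # cells, basins, rainfall (in)
CN = [random.randint(70, 95) for _ in range(N_CELLS)]

def cn_runoff(cn, p=RAIN_IN):
    """NRCS runoff depth (inches) for one cell."""
    s = 1000.0 / cn - 10.0
    return 0.0 if p <= 0.2 * s else (p - 0.2 * s) ** 2 / (p + 0.8 * s)

def total_runoff(pond_cells):
    return sum(cn_runoff(CN[i] - 5 if i in pond_cells else CN[i])
               for i in range(N_CELLS))

def evolve(pop_size=40, gens=150):
    pop = [set(random.sample(range(N_CELLS), K_PONDS))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=total_runoff)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            child = set(random.sample(list(a | b), K_PONDS))  # crossover
            if random.random() < 0.3:                         # mutation
                child.discard(random.choice(sorted(child)))
                child.add(random.choice(
                    [i for i in range(N_CELLS) if i not in child]))
            children.append(child)
        pop = survivors + children
    return min(pop, key=total_runoff)

best = evolve()
print("BMP cells:", sorted(best), "runoff:", round(total_runoff(best), 2))
```

Rerunning this with different values of K_PONDS is the idea behind the paper's Pareto curve of ponds vs. runoff reduction.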


Discussion:
I found that this article presented an interesting problem which was solved using genetic algorithms. The techniques used seemed logical, and the results make sense. The methods presented in this article should be a valuable tool for community planners hoping to use infiltration ponds for flood control.

In the future, these methods should be extended to systems consisting of both detention and infiltration ponds, since almost no urban stormwater management system is going to be completely infiltration-based. Also, a technique that can account for how a pond affects water quality as well as runoff might be useful.

Monday, March 2, 2009

Assignment #6

Behera, P., Papa, F., and Adams, B. (1999) "Optimization of Regional Storm-Water Management Systems," Journal of Water Resources Planning and Management, 125(2) pp. 107-114.

Summary:
In this article, the authors discuss their use of optimization techniques to calculate the required geometry of the detention ponds in a watershed on a system-wide scale, ensuring that the discharge at the outlet meets quality and flow requirements while minimizing the overall cost of building all the detention basins. For each basin, the authors used decision variables representing the storage volume, depth, and release rate of the pond. Constraints included the pollution reduction and the runoff control performance. The authors used isoquant curves (developed by Papa and Adams in 1997), which show the pollution control of a detention pond as a function of the pond's storage capacity and release rate.

In order to optimize the entire system, individual detention basins were allowed to discharge water that didn't meet flood attenuation or pollution requirements, as long as the requirements were met at the outlet. This allowed the authors to minimize the total cost of all the detention basins, since the various basins each had different construction and real estate costs. Using their methods, the authors were able to reduce the cost of constructing detention basins for a three-basin system by $100K.
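A sketch of the system-wide idea. The removal function below is a made-up stand-in for the Papa-Adams isoquants, and the costs and flow weights are invented; the point is just that requiring the removal target only at the outlet lets the optimizer shift storage toward the cheap sites:

```python
# System-wide pond sizing sketch. The removal function is a made-up
# stand-in for the Papa-Adams isoquants; unit costs and flow weights
# are invented. Requires SciPy.
import numpy as np
from scipy.optimize import minimize

UNIT_COST = np.array([3.0, 5.0, 8.0])    # $/m^3 of storage (site-dependent)
FLOW_W = np.array([0.5, 0.3, 0.2])       # each pond's share of outlet flow
TARGET = 0.70                            # required flow-weighted removal

def removal(v, r):
    """Hypothetical pollutant removal vs. storage v and release rate r."""
    return v / (v + 2.0 * r + 1.0)

def cost(x):
    return UNIT_COST @ x[:3]             # pay only for storage volume

def outlet_removal(x):
    v, r = x[:3], x[3:]
    return FLOW_W @ removal(v, r) - TARGET   # must be >= 0

x0 = np.array([5.0, 5.0, 5.0, 1.0, 1.0, 1.0])
res = minimize(cost, x0, method="SLSQP",
               bounds=[(0.1, 50)] * 3 + [(0.5, 5)] * 3,
               constraints=[{"type": "ineq", "fun": outlet_removal}])
v, r = res.x[:3], res.x[3:]
print("storage:", np.round(v, 2), "release:", np.round(r, 2),
      "cost:", round(res.fun, 1))
```

With these invented numbers the cheap upstream site ends up doing most of the work, which is the same cost-shifting behavior the authors exploit.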

Discussion:
I thought this was a very well-written article explaining the authors' use of optimization for a practical problem which water resource engineers have to solve all the time. I found the isoquants to be an interesting solution to the problem of modeling water quality.

The methods used in the paper could be a valuable tool for city governments and for developers developing huge tracts of land. I wonder whether having some basins release higher quality water and some release lower quality water would be permitted by the ordinances and standards regulating discharges.

Monday, February 23, 2009

Assignment #5

Berry, J., Fleischer, L., Hart, W., Phillips, C., and Watson, J.-P. (2005) "Sensor Placement in Municipal Water Networks," Journal of Water Resources Planning and Management, 131(3) pp. 237-243.

Summary:
In this article, the authors discuss their research into the best method for placing sensors in a water distribution network. As the threat posed by terrorism has become more evident over the last decade, the potential vulnerability of water distribution systems has become a point of concern. Sensors to test for a variety of potential contaminants are under development, but it is also important to investigate where to place those sensors, to ensure that the maximum number of people are protected at minimum cost.

The authors used integer programming to decide where to place the sensors. They called their particular method mixed integer programming (MIP), but from their description I was unable to see how it differed from normal integer programming. Their formulation minimized the number of unprotected people, with decision variables for whether or not to place a sensor at each location.

They used their MIP method on three pipe networks: two example networks from EPANET and one real network. For each network, they found the flow patterns every six hours over a 24-hour period (four patterns per network). They then estimated the population density served by each node and the risk probability for each node, and introduced noise to account for uncertainty in the exact values. Finally, they ran their MIP on each model to calculate the population not covered by varying numbers of sensors. They found that, as the number of sensors increases, the number of people not protected by the sensors decreases.
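My reading of the core formulation, as a toy problem: binary variables for sensor sites and for "scenario missed," a budget on sensors, and an objective that counts the population affected by missed scenarios. The coverage sets and populations are invented, and I'm using SciPy's generic MILP solver (version 1.9 or later), not whatever solver the authors used:

```python
# Toy sensor-placement MILP: minimize the population left unprotected,
# subject to a sensor budget. cover[i] lists the nodes where a sensor
# would detect an injection at node i; all data are invented.
# Requires SciPy >= 1.9 for milp.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

pop = np.array([120, 80, 200, 60, 150])          # people behind each node
cover = [{0, 1}, {1}, {1, 2, 3}, {3}, {3, 4}]    # detection sets per scenario
n, budget = len(pop), 2

# Variables: s_0..s_{n-1} (place sensor), then u_0..u_{n-1} (scenario missed).
c = np.concatenate([np.zeros(n), pop])           # cost counts missed people

rows = []
for i, cov in enumerate(cover):                  # u_i + sum(s_j, j in cov) >= 1
    row = np.zeros(2 * n)
    for j in cov:
        row[j] = 1.0
    row[n + i] = 1.0
    rows.append(row)
detect = LinearConstraint(np.array(rows), lb=1, ub=np.inf)

budget_row = np.concatenate([np.ones(n), np.zeros(n)]).reshape(1, -1)
budget_con = LinearConstraint(budget_row, lb=0, ub=budget)

res = milp(c=c, constraints=[detect, budget_con],
           integrality=np.ones(2 * n), bounds=Bounds(0, 1))
sensors = np.flatnonzero(np.round(res.x[:n]))
print("sensors at nodes:", sensors, "unprotected population:", int(res.fun))
```

Sweeping the budget and re-solving is what produces the sensors-vs-unprotected-population tradeoff the authors report.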

Discussion:
For me, this article was confusing at times. I couldn't really understand how their programming method differs from ordinary integer programming, and their explanation of how they were using noise wasn't clear. The discussion of sensor placement sensitivity, while interesting, didn't seem to add much to their final conclusions.

I feel like, if sensors were to be placed in a real network using a similar method, it would be important not to neglect time as the authors did in this paper, since timing is vital when developing emergency management procedures for once a contaminant is detected. Also, the placement of sensors (industrial vs. commercial vs. residential areas) could become a contentious issue for a real network, whereas this paper glosses over that.

Monday, February 16, 2009

Assignment 4

Lee, B. H. and Deininger, R. A. (1992) “Optimal Locations of Monitoring Stations in Water Distribution Systems”, Journal of Environmental Engineering, 118(1) pp. 4-16.

Summary
The US EPA requires the monitoring of drinking water quality, which is typically done at testing stations. In this article, the authors use linear programming to calculate where to place the testing stations in a distribution system so that the greatest percentage of the water in the system will be tested. Their methodology is based on the fact that water at a downstream node can be of no better quality than the water at the upstream node it came from. The researchers used skeletonized models of water distribution systems with constant flow directions.

Their LP used boolean (true/false) variables representing whether or not a testing station is placed at a particular node, and maximized the sum of (nodal demand) x (testing station true/false) over the covered nodes. The constraints were related to the flow geometry, the demand patterns, and the number of testing stations being used.
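The coverage logic, as I understand it, on a toy network: a station at a node "sees" all the water that flows through that node, so you pick the set of station nodes that maximizes covered demand. A brute-force sketch (fine for a handful of nodes; the paper's LP is what makes this scale) with an invented network:

```python
# Toy version of the monitoring-station coverage problem: choose k
# station nodes to maximize the demand whose water passes through a
# station. The network and demands are invented.
from itertools import combinations

# upstream[j] = nodes whose water reaches node j (including j itself),
# derived from an assumed constant flow pattern.
upstream = {
    0: {0},
    1: {0, 1},
    2: {0, 2},
    3: {0, 1, 3},
    4: {0, 1, 2, 4},
}
demand = {0: 10, 1: 25, 2: 15, 3: 30, 4: 20}
k = 2

def covered_demand(stations):
    covered = set()
    for s in stations:
        covered |= upstream[s]      # a station tests all water from upstream
    return sum(demand[j] for j in covered)

best = max(combinations(upstream, k), key=covered_demand)
print("stations:", best, "demand covered:", covered_demand(best))
```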

This strategy was applied to two water distribution systems, in Michigan and Connecticut, and the researchers found that their LP solutions significantly increased the efficiency of the testing.

Discussion
The article seemed to present a realistic example where a form of linear programming might be used on a real-world problem. The methods made sense intuitively, and the results were easy to understand. My one problem with the paper is the assumption of fixed flow patterns. This was clearly caused by the limits of the technology of the time, which forced the researchers to simplify the problem down to one flow pattern (or four, for the Connecticut system). As we all know, flow patterns change throughout the day as demands at the different nodes change. I wonder whether, using modern computers, you could calculate the coverage provided by a station at a particular node over the course of a day in order to better represent the dynamics of the system, and whether this would actually result in different solutions than the ones the paper arrived at.

Monday, February 9, 2009

Assignment 3

Hardin, G. (1968) "The Tragedy of the Commons," Science, 162, pp. 1243-1248.

Summary

In this paper, the author describes what he calls the "Tragedy of the Commons". He describes commons as finite resources shared by the public. The tragedy of the commons is that, for an individual, it is logical to increase his use of the commons, since the individual receives all the benefits while the cost is shared by everyone using the commons. However, when everyone using the commons follows this logic, the commons may be destroyed, to the detriment of all. Hardin contrasts the Tragedy of the Commons with the laissez-faire principles of Adam Smith, which hold that what is good for the individual will be good for society. Examples of the Tragedy from the paper include population growth, exploitation of natural resources, and pollution of the environment. The author concludes that "the morality of an act is a function of the state of the system at the time it is performed" and that historically the only ways to solve the tragedy of the commons have been regulation or transforming the commons into private property, forcing people to take responsibility for their own actions.

Discussion

I found the paper to be an interesting discussion of human psychology concerning resources owned by the public. His example of livestock sharing a common field was well thought out and logical. I tend to agree with his arguments concerning pollution: things like rivers, oceans, and air are common property which cannot be made into private property, so the only way to keep them from being destroyed by the Tragedy of the Commons is through some government intervention. I also agree that, when possible, transforming the commons into private property is the most effective way of averting the tragedy. At times, though, I felt the paper was more about promoting the author's individual beliefs on population than about actual science. I found his argument that having children is comparable to grazing sheep on a common field to be weak, while his references to "ultraconservatives" and Planned Parenthood exposed the author for what he really was: an ultra-liberal trying to promote his personal beliefs on human population by connecting them to the very real "Tragedy of the Commons".

Monday, February 2, 2009

Assignment #2

Atwood, D. and Gorelick, S. (1985) "Hydraulic Gradient Control for Groundwater Contaminant Removal," Journal of Hydrology, 76(1-2), pp. 85-106.

Summary:
The paper covers a research project using linear programming to determine the optimal technique for removing polluted groundwater from the aquifer below the Rocky Mountain Arsenal near Denver. The researchers determined that contaminant removal would be accomplished by pumping the contaminated water from the aquifer. In order to keep the contaminated plume in place, the researchers decided to use wells surrounding the plume to either pump or inject water so that the hydraulic gradient would keep the plume from spreading.
The researchers first selected the best of four possible locations for the well that would actually pump the contaminated water by trial and error, assuming that the well would pump at a certain constant rate. They then developed a linear program to determine how much water should be pumped or injected at each of the surrounding wells. The objective function minimized the total amount of pumping and injection by adjusting the rates at the surrounding wells. The constraints were that the central well should pump at its constant rate while the pumping and injection pattern of the surrounding wells should produce an inward-pointing gradient.
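
A miniature version of what that LP might look like. In practice the influence coefficients come from a groundwater flow model via superposition; here they are made-up numbers, and pumping and injection at each well are split into two nonnegative variables so the objective stays linear:

```python
# Miniature gradient-control LP: minimize total pumping + injection at
# four perimeter wells while keeping an inward gradient at six control
# points. Influence coefficients and gradient targets are invented; in
# practice they come from a groundwater model via superposition.
import numpy as np
from scipy.optimize import linprog

# infl[k, j]: inward gradient at control point k per unit NET pumping
# at well j (net = pump - inject).
infl = np.array([
    [0.9, 0.3, 0.1, 0.0],
    [0.3, 0.9, 0.3, 0.1],
    [0.1, 0.3, 0.9, 0.3],
    [0.0, 0.1, 0.3, 0.9],
    [0.4, 0.4, 0.1, 0.1],
    [0.1, 0.1, 0.4, 0.4],
])
g_min = np.full(6, 0.5)              # required inward gradient

# Variables: [pump_1..4, inject_1..4], all >= 0; net_j = pump_j - inject_j.
c = np.ones(8)                       # minimize total pumping + injection
A_ub = np.hstack([-infl, infl])      # enforces infl @ net >= g_min
b_ub = -g_min

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 8)
pump, inject = res.x[:4], res.x[4:]
print("pump:", np.round(pump, 2), "inject:", np.round(inject, 2))
```

With these invented coefficients the optimizer happens to meet the gradient constraints by pumping alone; whether a given well pumps or injects falls out of the signs of its influence coefficients on the control points.
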
The researchers looked into two optimization techniques: global optimization, in which they calculated the optimal pumping/recharge schedule with a single run of their optimization model, and sequential optimization, in which they divided the contaminant removal into 32 pumping periods and calculated the best pumping/injection rates for each well in each period.
The research resulted in an optimal pumping and injection schedule being selected. The paper states that global optimization produced a "different and better solution than the sequential strategy," which the authors attribute to the global strategy's ability to look ahead into the long term.
Discussion:
This paper seems significant since it applies linear programming, which we have been studying lately in CVEN 665, to a realistic problem: groundwater contamination. Although the problem being solved is quite complicated, I was able to understand, at least theoretically, what the researchers were attempting, which is helpful when learning the best ways of applying a theoretical concept such as linear programming to an actual problem.
Although not a fault, I found that the fact this paper is from 1985 could limit its practical applications in modern engineering. Many of the assumptions and formulations were selected and justified by the desire to make the calculations easier. The authors actually discuss computation times and the computers being used several times, which, while providing insight into some of their choices, may not be useful to the modern engineer. Furthermore, near the end of the paper, the authors state that the global solution is better than the sequential solutions, but technological limitations prevented them from exploring this in any depth. I think this would be a logical next step: attempting to create some sort of hybrid global-sequential strategy to further optimize the solution.

Monday, January 26, 2009

Hwk #1

Liebman, Jon (1976) “Some simple-minded observations on the role of optimization in public systems decision making,” Interfaces 6(4) pp. 102-108.

Summary:
At the time this article was written, optimization models using linear programming to calculate the best solution to a problem were being widely used in a number of fields, notably in improving firefighting strategies. In firefighting, models had been created to calculate where fire trucks and fire stations should be placed, and these models had been shown to increase the effectiveness of firefighting operations. In the public sector more broadly, particularly river basin water quality management, the author could find only one example where optimization models had been used with any success.
According to the author, the problems which had been successfully solved using optimization models shared several traits: they were problems of increasing efficiency, the goals of the model and its constraints were obvious, and, since it was the private sector making the decisions, there was only one stakeholder. As linear programming methods have become more capable, the problems these models are used to solve have become more complicated, particularly those tied to the public sector. Liebman describes these as "wicked problems." Common characteristics of wicked problems include highly interconnected systems with a very large number of stakeholders, and situations where the results of certain actions are unknown.
Liebman says that when dealing with these complicated problems, the role of the model changes. Instead of calculating the most efficient solution, the model is used to generate a number of different solutions whose purpose is to aid the decision-maker in making the final decision.
Discussion:
Liebman showed how the approach to using models undergoes a profound shift when dealing with complicated problems versus simplistic ones. His four suggestions near the end of the paper were interesting, but I found the first two to be the most insightful; the gist of these two being that a complicated model is really the organized thinking process of an individual, and that there are therefore many possible models for a single problem.
I feel like further research along this line could show what types of models the different stakeholders of a public problem would develop. Since, according to Liebman, more models help the decision-maker, having models representing a wide array of perspectives, in addition to those of scientists and engineers, is vital.



Ostfeld, A. and Salomons, E. (2004) "Optimal Layout of Early Warning Detection Stations for Water Distribution Systems Security," Journal of Water Resources Planning and Management, 130(5), pp. 377-385.

Summary:
Since September 11, the danger posed by evildoers to public water distribution systems has been a point of concern to the EPA and public utilities. The EPA has been funding increased security for water distribution systems, promoting information sharing between the various institutions, and encouraging improvements to the detection and treatment methods used by the local water utilities. The ability to monitor water quality within a distribution system is of particular concern, since early warning of contamination can provide valuable time for the utility to implement life-saving counter-actions.

In their paper, Ostfeld and Salomons attempt to improve the way early warning systems are laid out. In a best-case scenario, water quality would be monitored at every node in a distribution system; however, available technology makes this cost-prohibitive. In conjunction with the monitors, chlorine boosters are placed in the system, and the optimal locations of these boosters may be calculated using linear modeling. Several existing linear programming models create a binary matrix recording contamination at each node over time, treating each node as a potential source. These models have several shortcomings: they only consider steady-state conditions and they do not consider residence time. Also, since these models assume that water upstream of an acceptable node is acceptable, they encourage placing monitors on the edges of the distribution system, which means that contaminants within the system may not be detected as soon.

Building on these methods, Ostfeld and Salomons developed a model which calculates pollution at the nodes using a similar method to the old approach, except that their model allows simultaneous contamination from multiple nodes, and it uses an algorithm to track the evolution of biological contaminants. Using their model on two simulated water distribution systems, the researchers were able to calculate the level of service versus the number of monitoring stations.
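The binary contamination-matrix idea in miniature: propagate each injection scenario through a directed network with travel times and mark which nodes would see the contaminant within a time horizon. The network here is invented, and a real model would get flow directions and travel times from a hydraulic simulation rather than fixed edge weights:

```python
# Toy contamination/detection matrix: for each injection node, mark the
# nodes the contaminant reaches within a horizon. The directed network
# and travel times (hours) are invented; a real model would get them
# from a hydraulic simulation.
import heapq

travel = {                     # edge travel times along flow direction
    0: {1: 1.0, 2: 2.0},
    1: {3: 1.5},
    2: {3: 1.0, 4: 2.5},
    3: {4: 1.0},
    4: {},
}
HORIZON = 4.0

def reached(source):
    """Earliest arrival time at each node from `source` (Dijkstra)."""
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > dist[u]:
            continue
        for v, dt in travel[u].items():
            if t + dt < dist.get(v, float("inf")):
                dist[v] = t + dt
                heapq.heappush(heap, (t + dt, v))
    return dist

# detect[i][j] = 1 if a monitor at node j would see an injection at
# node i within the horizon.
nodes = sorted(travel)
detect = [[1 if reached(i).get(j, float("inf")) <= HORIZON else 0
           for j in nodes] for i in nodes]
for i, row in enumerate(detect):
    print(f"injection at {i}: {row}")
```

Stacking rows like these for every scenario (and, in the paper, for multiple simultaneous sources) is what feeds the layout optimization.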

Discussion:
I felt that the method used by the researchers could provide some interesting insight into monitoring water quality. I particularly thought that their inclusion of multiple contamination sources, as well as an algorithm for the movement and evolution of biological contaminants, could be useful.

I feel that their research, while it may be useful in finding an optimal number of monitoring stations, does not address the problem of placement of these stations. Once the number of monitoring stations has been found using the methods detailed in this paper, further research could be done to develop a method for the optimal placement of the stations throughout the system.

Friday, January 23, 2009

The Beginnings (aka hwk #0 for CVEN 665)

About Me: I am a graduate student at Texas A&M University. I am working on my Master of Engineering in Water Resource Engineering, which I am hoping to finish this summer. After that, off to the real world.

I am doing this blog for CVEN 665 -- Water Resource Systems Analysis. I am taking this course because I think it's important, after all these classes in which we studied the individual components of water resource systems in depth (e.g. pipe flow, open channel flow, stormwater), to take a class which brings all this information together -- to study the system as a whole so that, as an engineer, I can answer that all-important question: "how efficiently (cheaply) can I build it?"


"Education’s purpose is to replace an empty mind with an open one." -- Malcolm Forbes (father of publisher/conservative thinker/presidential candidate Steve Forbes).

What is critical thinking? Critical thinking, to me, is the process of taking an idea or a problem, breaking it down, and examining and evaluating its components using science, logic, and comparisons from your own experiences in order to develop a reasonable response.

That's it for today. Check back on Monday for my reviews of two delightful articles (Hwk#1).