Wednesday, April 22, 2009
Brill, E. D. (1979) “Use of Optimization Models in Public-Sector Planning,” Management Science, 25(5), pp. 413-422.
Summary
In this article, Brill discusses how multi-objective optimization models can be used for planning in the public sector. He notes that multi-objective programming is descended from single-objective optimization, and that both aim to calculate "the solution". For many public-sector problems, though, there isn't necessarily "a solution", even when using multi-objective analysis. This is because these problems are very often "wicked" problems, as described by Leibman (for more information, refer to my discussion of that article in Assignment One). With public-sector problems, quantifying values can be very difficult. Multi-objective programming can still be used on these problems to look at some of the possible solutions and to help encourage human creativity, but in the end there is no single "solution" to a public-sector problem.
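To make Brill's point concrete, here is a minimal sketch (with three invented plans and made-up cost and impact scores, nothing from the paper itself) of how a weighted-sum scalarization of two objectives picks out a different "best" plan depending on how the weights are set, which is exactly why there is no single "solution":

```python
# Toy alternatives: (cost, environmental impact), both invented.
alternatives = {
    "plan_A": (10.0, 9.0),
    "plan_B": (13.0, 5.0),
    "plan_C": (20.0, 2.0),
}

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    # Weighted-sum scalarization: weight w on cost, (1 - w) on impact.
    best = min(alternatives,
               key=lambda a: w * alternatives[a][0]
                             + (1 - w) * alternatives[a][1])
    print(f"weight on cost = {w:.2f} -> preferred: {best}")
# As w sweeps from 0 to 1 the preferred plan shifts from plan_C to
# plan_B to plan_A: the "optimum" depends entirely on the weights.
```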
Discussion
I agree with Brill that public-sector problems often have a number of stakeholders who value the different objectives differently. This idea is very similar, in fact nearly identical, to the ideas presented by Leibman in the article discussed earlier in the semester. I agree that multi-objective analysis could be useful for determining possible solutions. In general, this article struck me as a restatement of the Leibman article with some added discussion of multi-objective analysis.
Pan, T. C. and Kao, J. J. (2009) “GA-QP Model to Optimize Sewer System Design,” Journal of Environmental Engineering, 135(1), pp. 17-24.
Summary
Efficient design of sewer systems is important because they require such a large capital investment, and their complex hydraulics make optimization difficult. This paper discusses using quadratic programming (QP) and a genetic algorithm (GA) together to find a number of possible designs. Because many of the issues associated with a sewer system are unquantifiable and cannot be included in the optimization, the paper makes a point of saying that it is vital to produce a number of good solutions via GA and QP, which can then be judged against the unquantifiable constraints.
For this paper, the authors incorporated a quadratic program into their genetic algorithm. Two constraints were used: the pipes had to be large enough to meet demand, and pipe diameters had to increase in the downstream direction. Comparing their results to piecewise linearization and dynamic programming, the authors found that their GA-QP method produced more feasible solutions and ran faster.
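As a rough illustration of the genetic-algorithm side of this approach (only the GA half; in the paper a QP handles the continuous part of the design, whereas here the slopes are simply fixed, which is an assumption), here is a minimal sketch on a made-up five-pipe serial sewer with an invented cost curve, where a repair step enforces the rule that diameters never decrease downstream and a penalty enforces full-pipe Manning capacity:

```python
import math, random

random.seed(2)
DIAMS = [0.3, 0.45, 0.6, 0.75, 0.9, 1.05, 1.2]   # commercial sizes (m)
FLOWS = [0.15, 0.30, 0.45, 0.60, 0.80]           # design flows (m3/s), made up
SLOPE, N_MANNING = 0.005, 0.013                  # fixed slope/roughness (assumed)

def capacity(d):
    # Full-pipe Manning capacity: Q = (1/n) * A * R^(2/3) * S^(1/2).
    area, radius = math.pi * d * d / 4.0, d / 4.0
    return area * radius ** (2.0 / 3.0) * math.sqrt(SLOPE) / N_MANNING

def repair(genome):
    # Constraint: diameters must not decrease in the downstream direction.
    for i in range(1, len(genome)):
        genome[i] = max(genome[i], genome[i - 1])
    return genome

def cost(genome):
    base = sum(300.0 * d ** 1.5 for d in genome)  # hypothetical cost curve
    # Constraint: each pipe must be able to carry its design flow.
    penalty = sum(1e6 for d, q in zip(genome, FLOWS) if capacity(d) < q)
    return base + penalty

pop = [repair([random.choice(DIAMS) for _ in FLOWS]) for _ in range(30)]
for _ in range(100):
    pop.sort(key=cost)                            # lower cost is fitter
    keep = pop[:15]
    children = []
    for _ in range(15):
        a, b = random.sample(keep, 2)
        child = [random.choice(p) for p in zip(a, b)]   # uniform crossover
        if random.random() < 0.2:                       # mutation
            child[random.randrange(len(child))] = random.choice(DIAMS)
        children.append(repair(child))
    pop = keep + children

print("diameters (m):", min(pop, key=cost))
```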
Discussion
This was an interesting paper. The concepts discussed here might suit sanitary sewer studies for subdivisions of a certain scale. I wonder whether we could use similar concepts to linearize the piecewise functions in our Delaware Project and implement GA-QP there.
Monday, April 13, 2009
Assignment NINE
Shiau, J. T. and Wu, F. C. (2006) “Compromise Programming Methodology for Determining Instream Flow Under Multiobjective Water Allocation Criteria,” Journal of the American Water Resources Association, 42(5), pp. 1179-1191.
Summary
In this article, the authors discuss their analysis of a weir on Kaoping Creek in Taiwan. The weir was built to serve municipal and agricultural demands, and it currently operates under an operating rule designed to meet those demands while providing 9.5 cms for instream flow. Using their formulas for measuring hydrologic alteration, the authors found that the stream is nearly 70 percent altered, a level that is potentially harmful to the aquatic species endemic to the stream. The authors then use compromise programming, which is similar to multi-objective analysis, to evaluate the effects of providing different instream flows on both the alteration and the municipal supply. The study resulted in contours representing a sort of Pareto front.
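As far as I understand it, the compromise-programming step works roughly like the following sketch: for each candidate instream flow, compute the weighted Lp distance between the resulting (alteration, supply shortage) pair and the ideal point, then pick the flow with the smallest distance. The outcome functions, weights, and ideal/worst points below are all invented for illustration, not taken from the paper:

```python
candidates = [3.0, 6.0, 9.5, 12.0, 15.0]        # instream flows (cms)

def outcomes(flow):
    # Hypothetical trade-off: more instream flow lowers alteration
    # but raises the municipal-supply shortage.
    alteration = max(0.9 - 0.05 * flow, 0.0)    # fraction altered
    shortage = 0.02 * flow                      # fraction of demand unmet
    return alteration, shortage

ideal, worst = (0.0, 0.0), (0.9, 0.3)           # best/worst value per objective
weights, p = (0.5, 0.5), 2                      # equal weights, L2 metric

def lp_distance(flow):
    # Normalized, weighted Lp distance from the ideal point.
    return sum(w * ((v - i) / (x - i)) ** p
               for v, i, x, w in zip(outcomes(flow), ideal, worst, weights)
               ) ** (1.0 / p)

best = min(candidates, key=lp_distance)
print(f"compromise flow: {best} cms (distance {lp_distance(best):.3f})")
```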
Discussion
This seemed like an interesting paper. Last semester in the seminar, Wendy from the TWDB talked about a similar program that controlled reservoir release operations to improve downstream habitat while still meeting reservoir requirements, much like this paper.
I think the TWDB study may be an extension of this paper, since the TWDB also included the effects of streamflow variability on downstream alteration; they found that variable flows are important for maintaining natural conditions downstream. Perhaps that would be too complicated to implement in a multiobjective analysis like this one.
Monday, March 30, 2009
The Ocho
Neelakantan, T. R. and Pundarikanthan, N. V. (2000) “Neural Network-Based Simulation-Optimization Model for Reservoir Operation,” Journal of Water Resources Planning and Management, 126(2), pp. 57-64.
Summary
Chennai (formerly known as Madras) is a major city in the Tamil Nadu region of southern India. Although it receives 51 in/yr of rainfall, most of this comes during the three-month monsoon, so the city is prone to droughts. The city's water is supplied by a series of reservoirs, which have typically been regulated through a standard operating policy. This paper studies the management of these reservoirs through the implementation of a neural network model.
The optimization approach used was to minimize the overall deficit index (ODI), where the ODI is the sum of the squares of the deficits of all the reservoirs, after the deficits have been normalized by the size of the demand on each reservoir. Two scenarios (one with the current system and one with proposed reservoirs included) were then optimized to test their performance.
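The ODI itself is simple to state; here is a minimal sketch with made-up demand and delivery numbers for three reservoirs (the values are illustrative, not from the paper):

```python
demands = [120.0, 80.0, 200.0]    # demand on each reservoir (made up)
supplies = [100.0, 80.0, 150.0]   # water actually delivered (made up)

def overall_deficit_index(demands, supplies):
    # Sum of squared deficits, each normalized by that reservoir's demand.
    odi = 0.0
    for d, s in zip(demands, supplies):
        normalized_deficit = max(d - s, 0.0) / d
        odi += normalized_deficit ** 2
    return odi

print(overall_deficit_index(demands, supplies))  # -> about 0.090
```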
The optimization was done with a neural-network-based approach, where a neural network is an algorithm loosely based on how a brain works. First the neural network must be trained, and then the optimization is performed. The authors concluded that neural networks can optimize the workings of large water resource systems.
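From what I could piece together, the general pattern is: train a cheap network to mimic an expensive simulator, then search the trained network instead of the simulator. Here is a minimal sketch of that pattern, assuming a made-up one-reservoir system and an invented "simulator"; this is my guess at the simulation-optimization idea, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(release_fraction):
    # Stand-in for a slow reservoir simulation: a deficit index that
    # happens to be smallest near a release fraction of 0.6 (invented).
    return (release_fraction - 0.6) ** 2 + 0.05 * np.sin(10 * release_fraction)

# 1. Generate training data from the "simulator".
x = rng.uniform(0.0, 1.0, size=(200, 1))
y = simulator(x)

# 2. Train a one-hidden-layer tanh network by gradient descent.
W1, b1 = rng.normal(0, 1, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)              # hidden layer
    pred = h @ W2 + b2                    # network output
    err = pred - y
    dW2, db2 = h.T @ err / len(x), err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
    dW1, db1 = x.T @ dh / len(x), dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# 3. Optimize over the cheap trained network instead of the simulator.
grid = np.linspace(0, 1, 1001).reshape(-1, 1)
predicted_odi = np.tanh(grid @ W1 + b1) @ W2 + b2
print("best release fraction:", grid[np.argmin(predicted_odi)][0])
```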
Discussion
This paper was interesting in that it uses a new technique to try to solve an existing problem. Personally, I didn't have nearly enough knowledge of neural networks to understand most of what this paper is talking about.
My problems with (what I could understand of) this paper are:
I'm not really sure why the authors decided to force the system to maintain equity among the reservoirs. It seems like leaving this requirement out might allow a better overall solution. Also, the training process sounded far too complicated to ever come into widespread use.
Monday, March 9, 2009
Reading #7
Perez-Pedini, C., Limbrunner, J. F., and Vogel, R. M. (2005) “Optimal Location of Infiltration-Based Best Management Practices for Storm Water Management,” Journal of Water Resources Planning and Management, 131(6), pp. 441-448.
Summary:
Traditionally, stormwater has been controlled by systems of detention basins. However, these structures are expensive to construct, so a growing trend is to use low-impact development (LID), in the form of infiltration basins, to curb runoff.
The authors studied the Aberjona River watershed in Massachusetts. The area was divided into 120 m × 120 m cells, and elevations, land use, and flow paths were found for each cell using GIS. The NRCS curve number (CN) method was used to find the infiltration and runoff for each cell during a specific event. The optimization was set up so that each cell would be represented by a binary variable indicating whether an infiltration basin is built there; if true, the CN for that cell is decreased by five, representing an increase in infiltration. A genetic algorithm was then used to find the cells with the greatest impact on reducing runoff. The algorithm was run several times with different numbers of infiltration ponds to develop a Pareto-optimal curve of number of infiltration ponds vs. reduction in runoff.
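Here is a minimal sketch of the binary-GA idea on a toy 30-cell "watershed" (the cell data, storm depth, and fixed budget of five ponds are all invented; only the CN-minus-five rule and the NRCS runoff equation come from the article):

```python
import random

random.seed(1)
N_CELLS, N_PONDS, POP = 30, 5, 40
base_cn = [random.uniform(70, 95) for _ in range(N_CELLS)]   # made-up CNs
STORM_P = 3.0                                                # 3-inch storm (assumed)

def runoff(genome):
    # Total NRCS curve-number runoff; a basin lowers a cell's CN by 5.
    total = 0.0
    for cell, build in enumerate(genome):
        cn = base_cn[cell] - (5 if build else 0)
        s = 1000.0 / cn - 10.0                               # potential retention
        total += max(STORM_P - 0.2 * s, 0.0) ** 2 / (STORM_P + 0.8 * s)
    return total

def random_genome():
    genes = [1] * N_PONDS + [0] * (N_CELLS - N_PONDS)
    random.shuffle(genes)
    return genes

def crossover_and_repair(a, b):
    child = [a[i] if random.random() < 0.5 else b[i] for i in range(N_CELLS)]
    ones = [i for i, g in enumerate(child) if g]
    random.shuffle(ones)
    for i in ones[N_PONDS:]:          # repair: keep exactly N_PONDS basins
        child[i] = 0
    while sum(child) < N_PONDS:
        child[random.randrange(N_CELLS)] = 1
    return child

pop = [random_genome() for _ in range(POP)]
for _ in range(200):
    pop.sort(key=runoff)              # lower total runoff is fitter
    parents = pop[: POP // 2]
    pop = parents + [crossover_and_repair(random.choice(parents),
                                          random.choice(parents))
                     for _ in range(POP - len(parents))]

best = min(pop, key=runoff)
print("basins at cells:", [i for i, g in enumerate(best) if g])
```

Rerunning this with different values of N_PONDS and recording the best runoff each time is how a Pareto curve of ponds vs. runoff reduction would get traced out.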
Discussion:
I found that this article presented an interesting problem which was solved using genetic algorithms. The techniques used seemed logical, and the results seem to make sense. The methods presented in this article should be a valuable tool for community planners hoping to use infiltration ponds for flood control.
In the future, these methods should be extended to systems consisting of both detention and infiltration ponds, since almost no urban stormwater management system is going to rely entirely on infiltration. Also, a technique that can take into account how a pond affects water quality as well as runoff might be useful.
Monday, March 2, 2009
Assignment #6
Behera, P., Papa, F., and Adams, B. (1999) “Optimization of Regional Storm-Water Management Systems,” Journal of Water Resources Planning and Management, 125(2), pp. 107-114.
Summary:
In this article, the authors discuss their use of optimization techniques to calculate the required geometry of the detention ponds in a watershed on a system-wide scale, so as to ensure that the discharge at the outlet meets quality and flow requirements while minimizing the overall cost of building all the detention basins. For each basin, the authors used decision variables representing the storage volume, depth, and release rate. Constraints included the pollution reduction and the runoff-control performance. The authors used isoquant curves (developed by Papa and Adams in 1997), which show the pollution control of a detention pond as a function of the pond's storage capacity and release rate.
In order to optimize the entire system, individual detention basins were allowed to discharge water that didn't meet flood-attenuation or pollution requirements, as long as the requirements were met at the outlets. This allowed the authors to minimize the cost of all of the detention basins, since the various basins each had different construction and real estate costs. Using their methods, the authors were able to reduce the cost of constructing detention basins for a system containing three basins by $100K.
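Here is a minimal sketch of the system-wide idea, using brute-force search instead of the authors' optimization method, on a made-up three-basin system: each basin has its own unit cost, removal rises with storage (a crude stand-in for the isoquant curves), and the cheapest combination is kept as long as the combined removal at the outlet meets the target, even if individual basins fall short. All numbers are invented:

```python
from itertools import product
import math

# (unit cost $/m3, removal coefficient) for each basin -- hypothetical.
basins = [(40.0, 0.030), (65.0, 0.050), (90.0, 0.080)]
storages = range(0, 5001, 250)        # candidate storage volumes (m3)
TARGET_REMOVAL = 0.80                 # required removal at the outlet

def removal(volume, k):
    # Toy isoquant stand-in: diminishing returns with storage volume.
    return 1.0 - math.exp(-k * volume / 100.0)

best_cost, best_plan = float("inf"), None
for plan in product(storages, repeat=len(basins)):
    cost = sum(v * c for v, (c, _) in zip(plan, basins))
    # System removal: simple average here (i.e., equal flows assumed),
    # so one basin may under-perform if the others make up for it.
    system_removal = sum(removal(v, k) for v, (_, k) in zip(plan, basins)) / 3
    if system_removal >= TARGET_REMOVAL and cost < best_cost:
        best_cost, best_plan = cost, plan

print("storages (m3):", best_plan, "cost: $", best_cost)
```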
Discussion:
I thought this was a very well-written article explaining the authors' use of optimization for a practical problem that water resources engineers have to solve all the time. I found the isoquants to be an interesting solution to the problem of modeling the water quality.
The methods used in the paper could be a valuable tool for city governments and developers building out huge tracts of land. I wonder whether having some basins release higher-quality water and others release lower-quality water would be permitted by the ordinances and standards regulating discharges.
Monday, February 23, 2009
Assignment #5
Berry, J., Fleischer, L., Hart, W., Phillips, C., and Watson, J.-P. (2005) “Sensor Placement in Municipal Water Networks,” Journal of Water Resources Planning and Management, 131(3), pp. 237-243.
Summary:
In the article, the authors discuss their research to find the best method for placing sensors in a water distribution system network. As the threat posed by terrorism has become more evident over the last decade, the potential vulnerability of water distribution systems has become a point of concern. Sensors to test for a variety of potential contaminants are under development, but it is important to also investigate the placement of sensors to ensure that a maximum number of people are protected while minimizing cost.
The authors used integer programming to decide where to place the sensors. They called their particular method mixed-integer programming (MIP), but from their description I was unable to see how it differed from ordinary integer programming. Their method minimized the number of unprotected people, with decision variables indicating whether or not to place a sensor at each location.
They used their MIP method on three pipe networks: two example networks from EPANet and one real network. For each network, they found the flow patterns every six hours over a 24-hour period (four patterns per network). They then estimated the population density served by each node and the risk probability for each node, and introduced noise to account for the uncertainty in the exact values. Finally, they ran their MIP on each network to calculate the population not covered by varying numbers of sensors. They found that, as the number of sensors increases, the number of people not protected by the sensors decreases.
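To see what the underlying placement problem looks like, here is a minimal sketch on a made-up five-node network, solved by brute force rather than an actual MIP solver: choose a fixed number of sensor locations so that the population left unprotected is smallest. The populations and detection sets are invented; in a real network the detection sets would come from the flow patterns:

```python
from itertools import combinations

population = {1: 500, 2: 1200, 3: 300, 4: 800, 5: 400}   # made-up
# detect[s] = nodes whose contamination a sensor at s would catch
# before exposure (in a real network this depends on the flow patterns).
detect = {1: {1, 2}, 2: {2, 3, 4}, 3: {3}, 4: {4, 5}, 5: {5, 1}}
N_SENSORS = 2

def unprotected(sensors):
    covered = set().union(*(detect[s] for s in sensors))
    return sum(pop for node, pop in population.items() if node not in covered)

best = min(combinations(population, N_SENSORS), key=unprotected)
print("sensors at nodes:", best, "- unprotected:", unprotected(best))
```

A real MIP solver performs the same search implicitly, which is what lets the method scale to networks far too big to enumerate.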
Discussion:
For me, this article was confusing at times. I couldn't really understand how their programming method differs from integer programming, and their explanation of how they were using noise wasn't clear. The discussion of sensor-placement sensitivity, while interesting, didn't seem to add much to their final conclusions.
I feel that if sensors were to be placed in a real network using a similar method, it would be important not to neglect time as the authors did in this paper, since timing is vital when developing emergency-management procedures for after a contaminant is detected. Also, the placement of sensors (industrial vs. commercial vs. residential) could become a contentious issue for a real network, whereas this paper glosses over it.
Monday, February 16, 2009
Assignment 4
Lee, B. H. and Deininger, R. A. (1992) “Optimal Locations of Monitoring Stations in Water Distribution Systems,” Journal of Environmental Engineering, 118(1), pp. 4-16.
Summary
The US EPA requires the monitoring of drinking water quality, and this testing is typically done at testing stations. In the article, the authors use linear programming to calculate mathematically where to place the testing stations in a system so that the greatest percentage of the water in the system will be tested. Their methodology is based on the premise that water at a downstream node must be of lower quality than at the upstream node it came from. The researchers used skeleton models of water distribution systems with constant flow directions.
Their LP used boolean (true/false) variables representing whether or not a testing station is placed at a particular node, and it maximized the sum over all nodes of (nodal demand) × (testing-station true/false). The constraints were related to the flow geometry, the demand patterns, and the number of testing stations being used.
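My reading of the formulation is that it is essentially a maximal-covering problem. Here is a minimal sketch of that formulation on a made-up six-node skeleton network with fixed flow directions, written with the PuLP library (my choice, not something the paper used); covers[j] lists the nodes whose water a station at j would test, which in the paper comes from the flow geometry and demand pattern:

```python
import pulp

demand = {"A": 10, "B": 25, "C": 15, "D": 30, "E": 5, "F": 20}   # made-up
covers = {                     # invented upstream-coverage sets
    "A": {"A"},
    "B": {"A", "B"},
    "C": {"A", "C"},
    "D": {"A", "B", "D"},
    "E": {"A", "C", "E"},
    "F": {"A", "C", "E", "F"},
}
N_STATIONS = 2

x = {j: pulp.LpVariable(f"station_{j}", cat="Binary") for j in demand}
y = {i: pulp.LpVariable(f"covered_{i}", cat="Binary") for i in demand}

prob = pulp.LpProblem("monitoring_coverage", pulp.LpMaximize)
prob += pulp.lpSum(demand[i] * y[i] for i in demand)   # tested demand
prob += pulp.lpSum(x.values()) <= N_STATIONS           # station budget
for i in demand:
    # Node i counts as covered only if some chosen station tests its water.
    prob += y[i] <= pulp.lpSum(x[j] for j in demand if i in covers[j])

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("stations at:", [j for j in demand if x[j].value() == 1])
```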
This strategy was used for two water distribution systems, in Michigan and Connecticut, and the researchers found they were able to significantly increase the efficiency of the testing using their LP solutions.
Discussion
The article seemed to present a realistic example where a form of linear programming might be used on a real-world problem. The methods made sense intuitively, and the results were easy to understand. My one problem with the paper is the assumption of fixed flow patterns. This was clearly caused by the limited computing power of the time, which forced the researchers to simplify the problem by using one flow pattern (or four, for the Connecticut system). As we all know, flow patterns change throughout the day as demands at the different nodes change. I wonder whether, using modern computers, one could calculate the coverage provided by a station at a particular node over the course of a whole day, to better represent the dynamics of the system, and whether this would actually result in different solutions from the ones the paper arrived at.