One thing I like about the Mathematical Modeling class is that I always learn something new.
I was familiar with the Three-Fifths Compromise, but I didn’t realize until this semester that it was enacted in 1787, before the first US Census in 1790. I thought that got hammered out in the 1800s when slavery became more of an issue. I didn’t realize the abolitionist movement had that much force in the USA that early.
Today I worked with a student who was having strange problems with one of our models on his dataset (I think it was Austin, TX). The fit is not finding a true minimum; if he changes the initial conditions at all, he gets a different set of parameters that does a good job of fitting the data. Mathematically I recognize the problem. If you think of finding the minimum as finding the lowest point in a bowl, and the bowl is very flat at the bottom, you have a hard time finding the exact lowest point; the entire flat area looks like it is good enough. Algorithms searching for a minimum often do something along the lines of heading downhill looking for the lowest point. If they get into a relatively flat section, they can just wander around; everything there is close to the minimum. There are a lot of choices of parameter values that do a good job.
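Here is a minimal sketch of the effect (not his actual model, just a toy I made up for illustration): a least-squares fit where only the product a·b matters, so the bottom of the bowl is a whole flat valley rather than a single point. Different starting guesses land on different parameters, but the error is essentially identical.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: the "model" is y = a * b * x, so any (a, b) with a * b = 2
# fits the synthetic data equally well. The minimum is a flat valley,
# and the answer the optimizer reports depends on where it starts.
x = np.linspace(0, 1, 50)
y = 2 * x  # made-up "data"

def sse(params):
    a, b = params
    return np.sum((a * b * x - y) ** 2)  # sum of squared errors

for guess in [(1.0, 1.0), (0.5, 5.0), (10.0, 0.1)]:
    result = minimize(sse, guess)
    a, b = result.x
    print(f"start={guess} -> a={a:.3f}, b={b:.3f}, a*b={a*b:.3f}, SSE={result.fun:.2e}")
```

Every run reports a near-zero error, but the individual values of a and b bounce around with the initial guess, which is exactly what the student was seeing with his parameters.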
I know that can happen, but I never really considered that it could happen in this project. I wonder how many times it has happened without a student noticing. I don’t think this is the first time we’ve used the Austin, TX dataset.