
Tag Archives: math

Spring Semester 2011

Today is Friday (well, it was about an hour ago) and that means this week is done. It also happens to be the first week of my spring semester. And it was a hell of a ride. My current mood is a combination of extreme tiredness, sleepiness, and a great deal of excitement for the weeks to come. [Pretty sure that last statement is grammatically incorrect but I can’t be bothered to fix it because it conveys my feelings well.] Going into the planning stages for this semester, I sought to take on a major challenge: to outdo anything I’ve done before and push myself beyond any boundaries I had previously thought were the limits of my abilities. Obviously, classes here are really hard, but I see them as springboards rather than roadblocks. And the springboard metaphor is quite fitting: you jump on board and you sink for a bit, acutely aware of your weight and limitations and how far they’re dragging you down – but that’s only for the first half. Once you reach that critical point when things start clicking, it’s all upward from there. Suddenly your own weight is actually helping you go higher. And guess what? You’ll end up higher than you ever could have jumped otherwise.

The moral of the story is obviously this: Avoid falling off on the way down and it’ll be worth it on your way up.

And that’s how it was for a few of my classes last semester too. I took an upper-level math class on numerical analysis and found it pretty hard at first. It was intimidating to be in a room full of people a year or more older than me, with a professor who would walk into the room, give a half-hi gesture, and almost immediately begin lecturing without a break for the entirety of the class. Although I enjoy math a great deal, that in no way implies I’m particularly good at it. A math TA from last year put it aptly when he said mathematics was the process of banging your head on a table until 1) you passed out, in which case you called it a day, or 2) you eventually made a breakthrough. As convenient and elegant as it may be to think that there’s some magical state beyond which the marginal difficulty of learning the next theorem/definition/proof/algorithm falls off to some small, constant value, I really am starting to doubt that’s the case. The more progress I make in my education on the fronts of both math and computer science, the more I think that what instead happens is that we, either by consciously looking for it or by our minds’ own doing, start seeing the very same patterns and paradigms again and again. Of course this isn’t a new idea, but it hits you hard when you make the realization for yourself. It’s interesting because it’s almost as if my brain is developing some form of auto-complete: give it some properties of some new entity and it “predicts” some of the others that will follow. There are obviously tons of exceptions to this, and that’s where the fun comes in and keeps things interesting enough to continue pursuing (although the first time I heard that matrix multiplication doesn’t commute was jarring to my pristine understanding of the world). And it’s this same notion of “auto-complete”, or intuition, that gives a better grip on the weird world out there and thus provides the illusion that the marginal difficulty is indeed decreasing.

Another metaphor I particularly like derives from the use of the term “structure” when thinking about a problem: namely, in the context of a phrase like “now, after X years of research, we have a better understanding of the inner structure of [complexity class/research problem/concept/etc.]…”. In my mind, I see each of these concepts not quite as black boxes, but as dark rooms, the size of which is sometimes unknown from our current perspective. And so long as Erdős was just being metaphorical about his Book, there aren’t any lighting fixtures in the room. All we are given is a bunch of little candles. In fact, we do have a few matches, but it’s much harder to light a candle with a match than it is to use an already lit one. And so we go about setting up tiny little candles all about this room. They each brightly illuminate the things in their immediate vicinity, but the falloff of light is pretty drastic at times. And sometimes different candles illuminate the same portion of the room. Ideally, we’d like to use exactly one candle, so perfectly positioned that it lights the entire room, but finding that position is almost definitely at least NP-hard or something… The idea is that there are rooms with candles from other people, in fact all of the other people in the world. And then there’s your own room, where you have to discover which candles are lit by yourself. You don’t necessarily have to light them yourself, but you have to discover that they do indeed exist. But of course, the room is too large to light up fully. So instead, we attack a certain direction. Perhaps we like what the candles have illuminated so far, or perhaps we think we’re on the edge of a major push forward. Either way, we are forced to narrow down our search. It’s pretty amazing how much brighter my room has gotten in just the past few months. (Baseless prediction: the room is actually cylindrical. Interpret that as you understand it.)

And all of these thoughts are what follow me around these days. I love a good mystery, and this is exactly that. I am consistently more amazed and in awe than I ever expected to be. And I find the intricate complexity of the surface lit thus far extremely beautiful. Although theoretical computer science has a bit less of a natural feel (that is to say, closeness to the ways of nature and the universe) than mathematics, it’s still astonishing to see how things fit together. Yes, computer science is a man-made field consisting of arguably arbitrary dichotomies, depending on who you ask. And yes, the field is still very much in its infancy. But nonetheless, its concepts reveal something deeper than the definitions we have given them. To put it shortly, there’s still some magic left which lies undiscovered, waiting for us. As frustrating as it is that we do not understand some seemingly elementary relationships between things, it’s exactly that which gives the field its charm. I was sitting in class this week, with the professor writing theorem after theorem on the board, each of which had to do in some way with P vs. NP. And I thought how much more boring the class would have been if we did know the answer. Or how even more boring it would be if we knew that P ≠ NP. As much as I hope it’s resolved soon, it’s the idea of not knowing which is incredible in some strange way. It keeps the magic alive and I like it.

I considered what courses I wanted to take this semester. There are lots of things I want to learn about in computer science, with time being the only limitation. I decided to go forward with a bold move: taking two very difficult theory classes together. They are both on algorithms: one on the theory of algorithms taught by the great R. E. Tarjan, and the other a graduate course on advanced algorithm design – specifically, approximation algorithms. They are fast-moving and the latter is extremely difficult (I don’t doubt the former will soon become so too!). But I’m not getting off the springboard, no matter how tempting it may be. I will continue to push forward until that pivotal moment hits when things finally start making sense. I’m learning an insane amount every single day, and it’s amazing that a lot of things I had read about casually in the past are all suddenly coming together with a much brighter luminance. It’s hard, and I anticipate lots and lots of banging heads on tables ahead, but it’ll be worthwhile. This is one of those utterly invaluable experiences that I wouldn’t give up for anything.

I started the week inspired and now I am more inspired than I recall ever being. I live in an amazingly intricate and beautiful world and all I want to do is keep lighting candles.


Posted on February 5, 2011

 


PUMaC in < 24 Hours!

Wow, crazy. PUMaC is set to begin in under 24 hours! As I’ve mentioned before, PUMaC is Princeton University’s annual mathematics competition intended for high-schoolers (and brilliant middle-schoolers) around the country and world. It’s been an exciting journey thus far and I’ve had a great time working with the director and other core PUMaC staff to bring it all together. I haven’t seen the problems yet, but I’ve taken a peek at the Power Round test (already handed out to teams last Saturday) and it looks pretty interesting! The topic this year is in the land of graph theory: specifically, graph minors.

According to the latest numbers, we’re expecting 500+ middle- and high-school students on campus tomorrow! It’s pretty exciting recalling how much I looked forward to PUMaC, HMMT, and others when I was in high school. As fun as it is to organize everything, nothing beats actually competing. 🙂 I hope everyone has a great time tomorrow and that we see some fierce competition for the top spots!

 

Posted on November 19, 2010

 


Congrats to Prof. Elon Lindenstrauss!

I just wanted to say congratulations to Professor Elon Lindenstrauss, my professor for MAT 215, Analysis of a Single Variable, at Princeton.

As of yesterday morning, he is one of the recipients of the 2010 Fields Medal, and the first Israeli mathematician to win it!

It is truly an honor to have been taught by such a great mathematician.

 

Posted on August 20, 2010

 


PUMaC Site is Live!

Just wanted to point out that the Princeton University Math Club site is up and running! It’s at a temporary location right now, but the content shouldn’t change much when it is migrated over to the actual domain. The current URL is: http://www.alexogier.com/pumac/wp/. Please check it out and let me know your suggestions. Thanks!

 

Posted on July 13, 2010

 


A Week Off

So I’m going to be taking next week off just so I have some more time to relax and refocus. I feel that I never really got a chance to properly recharge between the end of finals and the start of my summer work. As I’ve said before, I just want some free time to think. More than anything, I enjoy having the time to do nothing in particular and not feel guilty about it either. My objectives for the week? I’d rather not specify exactly, but mostly I’m going to do lots of reading, some coding, some testing of Minefield, and clear through all those links I’ve saved for reading at a later time.

I think now is also a good time to give an update on my research, as I’ve completed about half of it, or so I guess. The goal is to assess the applicability of, and design, a better resource allocation algorithm for a distributed computing system. The main problem is to figure out not only how to evenly distribute available resources in order to fully utilize the capabilities of the system, but also how, on a local level, to run each program as fast as possible. To accomplish this, it is not good enough to just give everyone an equal share – rather, you have to give each program the combination of resources that best fits its needs. Along a related thread of thought, a natural consequence of this non-trivial allocation scheme is that some parts of the system – some nodes – are more heavily utilized, or more “popular.” But what is it that makes them so? These are the questions we are seeking to answer. Our approach is to look to the past for clues. So, we’re digging through many years’ worth of usage log files to see how the popularity of nodes has evolved. We are testing out several models of popularity metrics and seeing how well they match the data.

There are three of us working on this project, and each of us is focusing on different aspects. For my part, I’ve been focusing specifically on how to extract pertinent data from the terabytes of textual log files at acceptable speed, and how to best automate the task of actually finding relationships. I’ve decided to break up my work into many small programs that each do one thing, tied together by scripts that automate as much as possible. Apart from modularity being good style, it also preserves sanity, since this stuff can get pretty damn hairy.

There was one specific aspect that caught my interest in particular. The problem was tactfully posed by my professor as follows: say we want to see how the temperature of the water in a swimming pool and the height of the dive relate to how much pain the diver feels. Holding the temperature constant, we expect higher dives to hurt more, but if the height isn’t too great, the relation may be weak and depend more on how you dive, etc. But conversely, how does temperature make a difference? At first glance, there’s no appreciable trend over a small range of temperatures. But cross 32 degrees F and it’s a different ballgame: now we strongly expect the height of the dive to affect the pain felt, and for it to hurt a lot more!

In this same vein, consider the values of two seemingly independent data fields, one being the synthetic field of popularity and the other a more tangible quantity that deals with the actual hardware. We seek to automatically find the ranges of the data that yield strong correlations.

In mathematical language, we seek to automatically perform a segmented regression on the data. Our final regression may be piecewise-linear, some crazy high-order interpolated polynomial, or even non-parametric; but for our needs, this is at present not our concern. Thus, I chose to look only at the correlation coefficients. But there’s more than just one type: we are all taught Pearson’s coefficient, but other popular ones include Kendall’s tau and Spearman’s rho. Currently I’ve implemented just Pearson’s, but it will almost certainly be replaced by Kendall’s. Why? Because linearity is not as important as dependence in general. Having decided on this, it was time to decide how I was going to implement it in code. The main challenge was to automatically determine the breakpoints that would yield the maximum correlations. There is a good deal of literature on this, but in my preliminary search I found only papers that were either very marginally related to what I was doing or just too complicated, so I decided to implement a simple algorithm on my own.
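
For concreteness, here’s a minimal sketch of the naive O(n^2) computation of Kendall’s tau (the tau-a variant, which ignores tie corrections). The function name and interface are my own for illustration, not the project’s actual code:

    /* Kendall's tau-a for paired samples x[i], y[i]: concordant minus
     * discordant pairs, divided by the total number of pairs.
     * Tied pairs are skipped; tau-b would correct for them. */
    double kendall_tau(const double *x, const double *y, int n)
    {
        long concordant = 0, discordant = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                double s = (x[i] - x[j]) * (y[i] - y[j]);
                if (s > 0)      concordant++;
                else if (s < 0) discordant++;
            }
        }
        return (concordant - discordant) / ((double)n * (n - 1) / 2.0);
    }

There are faster O(n log n) merge-sort-based versions, but for a first pass the quadratic count is easy to verify against known data.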

I was writing this in C, so I grumbled about not having my usual built-in data structures from Java (grumble grumble), but I did have decent ArrayList and HashMap implementations I could use (I miss my TreeMaps every day…). Basically, there are two quantities we wish to maximize: the correlation and the length of the segment. The latter matters because a segment of one or two points will de facto have a correlation of unity, and our data is certainly non-uniform. So I thought about some kind of learning algorithm to accomplish the task and finally ended up with a simple algorithm that runs in ~2N time (where ~ denotes tilde notation). I start with some initial set of breakpoints evenly distributed along one axis. Then I run a loop in which I compare the correlations of adjacent blocks: if the weighted sum of correlation and number of points for the merged block exceeds that of either block individually, I remove the middle breakpoint and coalesce the blocks. I do this through all the blocks and then do the exact same thing in the reverse direction. A rough sketch of one such pass follows.
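
Everything below is my own reconstruction under stated assumptions, not the real implementation: the breakpoint array bp, the Pearson-based score, and the weights X and Y are all hypothetical, and I read “exceeds either block individually” as “beats both neighbors.”

    #include <math.h>
    #include <string.h>

    #define X 1.0   /* weight on correlation strength (assumed value) */
    #define Y 0.01  /* weight on segment length (assumed value)       */

    /* Pearson correlation over the index range [lo, hi). */
    static double pearson(const double *x, const double *y, int lo, int hi)
    {
        int n = hi - lo;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = lo; i < hi; i++) {
            sx += x[i];  sy += y[i];
            sxx += x[i] * x[i];  syy += y[i] * y[i];  sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double vx = sxx - sx * sx / n, vy = syy - sy * sy / n;
        return (vx > 0 && vy > 0) ? cov / sqrt(vx * vy) : 0.0;
    }

    /* Weighted sum of correlation strength and block size. */
    static double score(const double *x, const double *y, int lo, int hi)
    {
        return X * fabs(pearson(x, y, lo, hi)) + Y * (hi - lo);
    }

    /* One left-to-right pass: block k spans [bp[k], bp[k+1]). Whenever
     * merging two adjacent blocks beats both of them individually, the
     * middle breakpoint is dropped. Returns the new breakpoint count;
     * a second pass then runs in the reverse direction. */
    int coalesce(const double *x, const double *y, int *bp, int nbp)
    {
        int k = 0;
        while (k + 2 < nbp) {
            double left   = score(x, y, bp[k],     bp[k + 1]);
            double right  = score(x, y, bp[k + 1], bp[k + 2]);
            double merged = score(x, y, bp[k],     bp[k + 2]);
            if (merged > left && merged > right) {
                memmove(&bp[k + 1], &bp[k + 2],
                        (nbp - k - 2) * sizeof *bp);
                nbp--;          /* stay at k: the merge may cascade */
            } else {
                k++;
            }
        }
        return nbp;
    }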

Now I have a set of unequal-length blocks. I then copy all the blocks’ information into my ArrayList and sort with a custom comparator that compares the weight of each block, defined by X*abs(correlation) + Y*(nelem_in_block), where X and Y are some constants.
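
To make the sorting step concrete, here is a hypothetical Block record with a comparator of exactly that form, using the standard library’s qsort in place of the ArrayList sort mentioned above; X and Y are again placeholder constants:

    #include <math.h>
    #include <stdlib.h>

    typedef struct {
        int    lo, hi;          /* index range of the block     */
        double correlation;     /* coefficient over that range  */
    } Block;

    static const double X = 1.0, Y = 0.01;   /* assumed weights */

    /* Weight of a block: X*abs(correlation) + Y*(nelem_in_block). */
    static double weight(const Block *b)
    {
        return X * fabs(b->correlation) + Y * (b->hi - b->lo);
    }

    /* qsort comparator: heaviest blocks first. */
    static int by_weight_desc(const void *pa, const void *pb)
    {
        double wa = weight((const Block *)pa);
        double wb = weight((const Block *)pb);
        return (wa < wb) - (wa > wb);
    }

    /* usage: qsort(blocks, nblocks, sizeof(Block), by_weight_desc); */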

In practice, this approach works reasonably well when the number of initial breakpoints is large enough. And since we are only after heuristics anyway, it does the job.

With all of that done, I ran some tests on the data, and the results look close to what we expected. When I get back, I’m going to tackle the task of tweaking and reworking the popularity metric to get even finer results.

But for right now, it is time to kick back and relax. Awesome.

 

Posted on July 9, 2010

 
