End Semester Recap

I just finished all the exams and papers today. It has been a long day (I woke up at 6) and I feel exhausted. Still, I want to do a quick recap of this semester before my judgment is affected by my final grades.

Courses

CS 380D Distributed Systems

My first exam was a disaster. The exam was all about system design plus understanding of Raft. I wasn't used to system design in general. All I did was memorize every detail of some system implementations, which usually doesn't matter from a design perspective. Vijay emphasized this point a lot, but I didn't get it until the second half of the course. The course is good, and I have two big takeaways:

  • I can now comfortably read distributed systems papers. I cannot claim I can read all types of systems papers, but for distributed systems papers, I have begun to build momentum and know where to focus during reading. It took a lot of struggling to get to this point, but I'm happy overall after reading more than 30 papers.
  • I got intrigued by distributed systems and storage systems. In the past, I struggled to find my research interests. But, thanks to this course, I have become more intrigued by the combination of distributed systems and storage. Right now, I like storage more. I read tons of LSM-based storage papers to find a topic for my final course project. I really enjoyed reading LevelDB's and PebblesDB's code and enhancing them in some ways. That further makes me want to learn more about SSDs and HDDs.

CS 388 Natural Language Processing

I traded the algorithms class for this course, and I have mixed feelings right now. On one hand, unlike the NLP course I took in the previous semester, which looked at NLP from a modeling perspective (HMMs, CRFs, different networks), this semester's course approaches it from more traditional linguistics + machine learning perspectives. I really like this part. Overall, I strongly believe that linguistic domain knowledge should play the key role in NLP study, not various kinds of deep learning mania. In the first two homeworks, we looked at language models and LSTMs based on the intuition that prediction can go in both directions. I really like Mooney's view that you should always think about the intuition for whether a model can work, instead of mindlessly applying models. Like last semester's NLP class, my interest in the class declined as the semester progressed, partly because the material was no longer relevant for the homework and exams. That is my bad.

The final project was on VQA, and it was mostly done by my partner. I only gathered the literature, surveyed the field, and did some proofreading. I'm OK with that, as I wanted more time to work on my systems project and my partner wanted to work alone on the modeling. This leads to my lesson learned from the class:

  • Graduate school is about research, not classes. Pick the easiest courses and buy yourself time to work on the research problems that attract you.

If I could go back right now, I would take the algorithms class instead. My thought on NLP is that I want to start from the dumbass baseline and learn the history of the field. If you think about NLP, the most basic technique is just regular-expression pattern matching. How we got from there to more complex statistical models is the most interesting thing I want to learn.
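As a toy illustration of that dumbass baseline, here is a minimal sketch; the date-extraction task and the pattern are my own hypothetical example, not from any course:

```python
import re

# A hypothetical rule-based "baseline" for extracting US-style dates,
# in the spirit of starting NLP from regular-expression pattern matching.
DATE_PATTERN = re.compile(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b")

def extract_dates(text):
    """Return (month, day, year) tuples for every matched date in text."""
    return [tuple(int(g) for g in m.groups()) for m in DATE_PATTERN.finditer(text)]

print(extract_dates("Finals ran from 12/10/2018 to 12/17/2018."))
# -> [(12, 10, 2018), (12, 17, 2018)]
```

Rules like this are brittle, which is exactly why the field moved toward statistical models, but they make a surprisingly strong first baseline.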

LIN380M Semantics I

The course is taught by Hans Kamp, who I believe invented Discourse Representation Theory (DRT). Really nice man. I learned predicate logic, typed lambda calculus, Montague grammar, and DRT. It is a very good course on the logic-based approach to deriving the semantic meaning of a sentence. However, I do feel people in this field put a lot of effort into handling rule-based exceptions, like how to handle type clashes in Montague grammar. When I turned in the final exam, Hans was reading a research paper. He is still doing research, and that inspires me a lot.
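The typed-lambda-calculus view can be sketched in a few lines of Python, treating word meanings as functions and syntactic composition as function application; the tiny model and word denotations below are made-up examples, not anything from the course:

```python
# A toy Montague-style composition over a three-entity model.
# A determiner like "every" has type (e->t) -> ((e->t) -> t):
# it takes two predicates and returns a truth value.
entities = {"alice", "bob", "carol"}
student = lambda x: x in {"alice", "bob"}   # denotation of "student"
sleeps  = lambda x: x in {"alice", "bob"}   # denotation of "sleeps"

# every = lambda P. lambda Q. forall x. P(x) -> Q(x)
every = lambda P: lambda Q: all((not P(x)) or Q(x) for x in entities)
# some = lambda P. lambda Q. exists x. P(x) and Q(x)
some = lambda P: lambda Q: any(P(x) and Q(x) for x in entities)

# Function application mirrors the syntax tree:
print(every(student)(sleeps))                 # "Every student sleeps" -> True
print(some(student)(lambda x: x == "carol"))  # "Some student is Carol" -> False
```

The type clashes mentioned above show up exactly when a word's function type does not match its argument, which is where the rule-based exception handling comes in.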

Other Lessons Learned

  • “Don’t be afraid to fail, be afraid not to try.” I learned a lot from my final systems project partner. Reading complex code can be daunting, but we can always start playing around even when we cannot fully understand the code. There is a great psychological barrier to overcome. My partner always starts with reading and then writing. When a bug happens, he is happy, because the bug is an indicator of progress, which eventually leads to working code.
  • Work independently. Whenever I get stuck for a while, I always want to seek help instead of counting on myself to solve the problem. It seems that I never trust my own ability to solve problems. By observing how my project partner solves problems, I learned a lot: start trying and always seek the root cause of the problem; the situation changes as long as you keep trying.
  • Some tips about system paper writing:
    • Use hatches on bar graphs. People may print a paper in black and white, and hatches help them distinguish which bar is your system and which bar is the baseline.
    • Add more description below each figure and table. I used to think there should be only one line of description per figure. But, as my other project partner pointed out, people need instructions when reading graphs. People love figures, and they hate digging back through the paragraphs to find the instructions needed to understand a graph. So put the instructions directly below the figure. Great insight!
  • I really want to know how to measure a system accurately. From my systems project, I realized that measuring system performance is really hard. Numbers fluctuate wildly, and you have no clue why, because there are so many layers of abstraction and so many factors in the experiment environment that can potentially impact the measurement. I really want to learn more about this area during my own study and summer internship.
  • System improvements without provable theoretical guarantees are very unlikely to succeed. Overhead, or the constant factor hidden in the big-O model, usually dominates the actual improvement you think you can get. For example, there is overhead in spawning threads: we need to compare the gain from having multiple threads run subtasks in parallel against simply having one thread do the whole thing. PebblesDB's guards and improvements to compaction ultimately prove that we really need to think more before getting our hands dirty. Reading the paper, I get the feeling that the authors knew the system would work before implementing it, because they could clearly argue that their design works before writing a single line of code. I need to develop more sense for this and take more theory classes.
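The hatching tip above can be sketched with matplotlib; the system names and throughput numbers below are hypothetical placeholders, not real measurements:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical comparison numbers, for illustration only.
systems = ["Baseline", "OurSystem"]
throughput = [120, 180]
hatches = ["//", ".."]  # distinguishable even when printed in black and white

fig, ax = plt.subplots()
bars = ax.bar(systems, throughput, color="white", edgecolor="black")
for bar, hatch in zip(bars, hatches):
    bar.set_hatch(hatch)  # pattern tells the bars apart without color
ax.set_ylabel("Throughput (ops/s)")
fig.savefig("comparison.pdf")
```

With white fills and distinct hatch patterns, the two bars stay readable on a grayscale printout.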
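To make the thread-spawning-overhead point concrete, here is a rough microbenchmark sketch; in CPython the GIL plus spawn/join cost usually makes the threaded version no faster for small CPU-bound work, and the exact numbers will fluctuate from run to run, echoing the measurement point above:

```python
import threading
import time

def busy(n):
    """A small CPU-bound subtask: sum the first n integers in a loop."""
    total = 0
    for i in range(n):
        total += i
    return total

N = 200_000

# One thread doing the whole task.
t0 = time.perf_counter()
busy(N)
single = time.perf_counter() - t0

# Four threads splitting the same task; spawn/join overhead (and, in
# CPython, the GIL) can erase the expected "parallel" gain.
t0 = time.perf_counter()
threads = [threading.Thread(target=busy, args=(N // 4,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
multi = time.perf_counter() - t0

print(f"single thread: {single * 1e3:.2f} ms, 4 threads: {multi * 1e3:.2f} ms")
```

The point is not the specific numbers but the comparison itself: unless the per-thread work dwarfs the spawn/join cost, the constant factors win.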

Ok. Time to pack and catch the flight.
