Friday, April 4, 2014

Week 12: Finale

Looking back on the semester, I regret slacking off most of the time. I didn't practice coding as much as I would have liked to, and recursion still scares me.

I did terribly on Test 2, so the only solution is to go through all the past exams, labs, and lectures in the hope of being comfortable enough for the final exam. Thankfully, solutions to the test are available, so I can learn from my mistakes.

I didn't particularly appreciate the idea of the SLOG. It was not as enjoyable as writing code. I'd rather have more quizzes or exercises, anything directly related to coding.

Last but not least, I'm well aware that it's my own responsibility to keep practicing. When you do badly at something and feel frustrated about it, it's harder to persevere and keep at it. This coming study break, I'll face my fears and fix the problems I created. I'm sure that if I try harder, I can still get 80s.

Good luck everyone!

Saturday, March 29, 2014

Week 11: Sorting

I fell ill this week, so unfortunately I missed the Wednesday and Friday lectures. The good news is that this week's suggested reading is incredible. (Thanks, Dan! Good choice!)

There are detailed diagrams with well-elaborated explanations, code right under each diagram that you can run on the webpage itself, and multiple-choice questions to make sure you understand the concept. What more could you ask for?

As a minor detail, I noticed that the selection sort in our course finds the minimum and inserts it on the left, instead of finding the largest and inserting it on the right.

When I saw O(n lg n) in the slides, I was confused about where the lg n came from, so I started with the recommended reading. For merge sort, I found out that log n refers to the number of times you can divide the list in half, where n is the length of the list. As an example, I found this on Stack Overflow:
"For example, starting from, say, 128 items, you divide into groups of 64, 32, 16, 8, 4, 2, 1 items -- 7 iterations (and log2(128) = 7)." (Source: http://stackoverflow.com/questions/10425506/intuitive-explanation-for-why-quicksort-is-n-log-n)
Also, the n comes from the merge operation: you compare leftlist[i] with rightlist[j] one pair at a time to find the smaller one, and the actual work is done when you append the smaller one to the merged list.
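
To check my understanding, here's a rough merge sort sketch I put together from the reading's description (my own code, not the course's): the repeated halving gives the log n levels, and merge does the O(n) comparing-and-appending work at each level.

def merge_sort(lst: list) -> list:
    """Return a sorted copy of lst using merge sort."""
    if len(lst) <= 1:
        return lst[:]
    mid = len(lst) // 2                  # halving the list -> about log n levels of calls
    left = merge_sort(lst[:mid])
    right = merge_sort(lst[mid:])
    return merge(left, right)

def merge(left: list, right: list) -> list:
    """Merge two sorted lists into one sorted list (the O(n) work per level)."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # compare left[i] with right[j]...
            result.append(left[i])       # ...and append the smaller one
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])              # one of these two is already empty
    result.extend(right[j:])
    return result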

The mechanism of quick sort was harder to grasp than merge sort for me, so I'll let my mind process it a bit more before I read it again tomorrow.

Sunday, March 23, 2014

Week 10: A2's Wrath

A loving father would let his child take a glimpse of true hardship, not to let the child fear for eternity, but to arm him with tools to fight for himself.

Most UofT courses tend to pile on a tremendous amount of deadlines near the end of the term. In these torturous weeks, I had to face Assignment 2 while fatigued. Had I known how hellish this week would be, I would certainly have pushed myself to start the assignment on day one. The logic involved in this assignment is quite difficult to grasp. Once you understand the general idea of the regexes, as I did after reading the handout plenty of times, your goal becomes clear. The problem is that understanding your task is only step one of the whole assignment. Anyhow, once I understood what I was asked to do and was happily coding away, the brackets of regex dot and regex bar started haunting me. As usual, I looked for hints on the discussion board and through Google. This time, Google had nothing for me, because Python has its own regular expressions and methods tailored to them, so my only guides were myself and the discussion board. I wish I had never learned that people used helper functions for is_regex.

I had been thinking along the right track the entire time, even if my solution was nowhere close to the model answer. But I was too hung up on the idea that my approach would take forever and that an easier way out was most likely a helper function. Countless hours were wasted staring at the code, desperately trying to figure out what I would even want a helper function for. Eventually, after 12 hours, I decided to go with my own idea and got it finished half-decently. My only regret is that I never went back and added the one line of code that would have addressed all the remaining cases, which is a simple
else:
    return False
line inside my brackets-addressing case.
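
For anyone curious, here's roughly the shape I mean, written for a simplified made-up grammar (a single character from '01e', a '*' suffix, or a bracketed binary expression with '.' or '|'), not the actual A2 specification. The final else is the kind of catch-all line I was talking about.

def is_regex(s: str) -> bool:
    """Return True iff s is a valid regex in this simplified toy grammar."""
    if len(s) == 1:
        return s in '01e'
    elif s.endswith('*'):
        return is_regex(s[:-1])
    elif s.startswith('(') and s.endswith(')'):
        inner = s[1:-1]
        # Try every position of a binary operator as the top-level split.
        for i in range(len(inner)):
            if inner[i] in '.|' and is_regex(inner[:i]) and is_regex(inner[i + 1:]):
                return True
        return False      # no valid split was found inside the brackets
    else:
        return False      # the catch-all: anything else is not a regex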

I really wish I knew how outstanding minds work. Recursion now strikes me as intricate ideas presented in a minimalist manner, just like the most amazing performers who work for years on one performance, and the outcome is just a few hours that some may overlook and never appreciate. But I digress... I want to know how brilliant minds filter ideas after sufficient brainstorming, because when I got stuck, I had no will to carry on, not even for some simple trial and error. Also, there was barely any idea in my head except one; I simply didn't see how else I could have approached the problem. This is the first time I have felt so powerless in this course, but at least I learned to start early on all assignments. By starting early, my mind can process the problem subconsciously and come up with ideas over time.

This assignment was another wake-up call about my habits. Hopefully I can make up for my inability to rely on my immediate wits with patience and determination... I mean, not procrastinating, and strictly enforcing that from now on.

Sunday, March 16, 2014

Week 9: Complexity

One thing I love about CS is the logic involved. The signature approach to the permutation problem was really neat. It struck me that seeing and solving a problem from another angle can ultimately be the solution you're looking for. But how does one come up with a different approach? Unfortunately, I don't have an exact answer to that. Perhaps you eventually come up with one if you expose yourself to the problem regularly.

In this week's lab, we were confused about how you can call a method on a node and expect Python to find the appropriate class's version of it. Our TA explained that this is dynamic binding, where the method that actually runs depends on the object it is called on.

For example,

class BST(object):
    ...
    def count_less(self, item):
        """(BST, object) -> int
        Return the number of items in this BST that are strictly less than
        item.
        """
        return self.root.count_less(item)
        # The line above runs _BSTNode.count_less (or _BSTNone.count_less)
        # because self.root is a node object.
    ...

class _BSTNode(object):
    ...
    def count_less(self: '_BSTNode', item: object) -> int:
        """
        Return the number of items in the BST rooted at this node that are
        strictly less than item.
        """
        if self.item < item:
            # Everything in the left subtree is also less than item.
            return self.left.count_less(item) + 1 + self.right.count_less(item)
        else:
            # self.item >= item, so only the left subtree can hold smaller items.
            return self.left.count_less(item)

# Strangely enough, it also reaches the class that handles the empty (None-like) nodes at just the right time:

class _BSTNone(_BSTNode):
...
    def count_less(self, item):
        """(_BSTNone, object) -> int
        Return the number of items in the BST rooted at this node that are
        strictly less than item.
        """
        return 0

I'm still not sure how it works, so I Googled a bit. It's also called late binding: the method is bound at run time, in contrast to early binding, where it is bound at compile time. (Source: http://stackoverflow.com/questions/10580/what-is-early-and-late-binding) That made very little sense to me, just like the Wikipedia page, so I found this website: http://www.i-programmer.info/programming/theory/1477-late-binding-myths-and-reality.html
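
In the meantime, here's a minimal toy sketch of my own (the classes and method below are made up, not from the course code) of what I think dynamic binding means in Python: the method that runs is chosen at run time based on the class of the object the call is made on.

class FullNode:
    def describe(self):
        return 'a node with an item'

class EmptyNode:
    def describe(self):
        return 'an empty node'

for node in (FullNode(), EmptyNode()):
    # The call below is written once, but which describe() runs
    # depends on type(node) at run time.
    print(node.describe())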

That website has a detailed explanation with an example in C#. I decided I should probably worry about the deeper details later and instead spend the time looking at this week's code and maybe practicing a bit.

Friday, March 7, 2014

Week 8: Lab on LinkedLists

This week's lab is a follow-up to last week's task: we did the part called "implement list" (sort of). It used the same kind of recursion for LinkedList traversal that I mentioned in last week's post. At some point after our lab on Tuesday, Dan added the extra section to the lab.

I did copy_ll:

  def copy_ll(self: 'LinkedList') -> 'LinkedList':
    """Return a copy of LinkedList.
    >>> lnk = LinkedList(5)
    >>> lnk.prepend(7)
    >>> lnk2 = lnk.copy_ll()
    >>> lnk is lnk2
    False
    >>> lnk == lnk2
    True
    >>> lnk.prepend(11)
    >>> lnk == lnk2
    False
    """
    if self.empty:
      return LinkedList()
    return LinkedList(self.head, self.rest.copy_ll())

I ran the docstring examples, but they failed on the first lnk == lnk2, so I ran the code manually and kept printing lnk and lnk2 to compare them:

>>> lnk = LinkedList(5)
>>> lnk.prepend(7)
>>> lnk2 = lnk.copy_ll()
>>> lnk
LinkedList(7, LinkedList(5, LinkedList()))
>>> lnk2
LinkedList(7, LinkedList(5, LinkedList()))

They printed exactly the same, so I suspected the missing __eq__ method was to blame: without __eq__, the == comparison falls back to checking whether the two are the same object, which lnk and lnk2 deliberately are not. I proceeded to implement __eq__.

  def __eq__(self, other) -> bool:
    if self.empty and other.empty:
      return True
    elif self.empty or other.empty:
      return False
    return self.head == other.head and self.rest == other.rest

Then it worked. That took far too much effort, because I kept "fixing" copy_ll to pass the docstring test and only realized quite late that the missing __eq__ was the real culprit.

Tired now. See you next week :P

Friday, February 28, 2014

Week 7: Linked Lists

When first introduced, linked lists appeared abstract and hard to grasp. The idea of "next" or "rest" did not strike me as a straightforward concept until it was explained by our TA, Bryan. This is the picture I had settled on by the end of the week:

Node 0: empty; the variable bound to the class instance. In other words, after
  >>> a = LinkedList()
node 0 is basically a, used to reach the first node (node 1) of the newly created instance of the class LinkedList. So node 0 points to node 1.

Node 1: first element of the linked list; similar to the first element of a traditional python list. It points to node 2, i.e. the "rest" or "next" of node 1 is node 2.

Node 2: second element of the linked list; similar to the second element of a traditional python list. It points to node 3, i.e. the "rest" or "next" of node 2 is node 3.

If you consider node 1 the head, the rest would be node 2, node 3, node 4, and so on until the end, because all the nodes are linked and each points to its own next.

Section 24.2 of the reading explains this similarly, but with graphical aids and shell code to illustrate it.

The methods that implement LinkedList are generally recursive. As the lab activity suggests, once you understand one of them, the others tend to be similar.

The key is to start with the base case that handles the empty list, then save the rest as a temporary value to preserve it while you change the head. Also, to traverse the list, use an index value and recursively call with index - 1 on the next node until the index puts you where you want to be.
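
Here's a rough sketch of both patterns using a simplified LinkedList of my own (not the course's exact class), just to pin the ideas down:

class LinkedList:
    """A toy singly linked list: either empty, or a head item plus a rest list."""

    def __init__(self, head=None, rest=None):
        self.empty = head is None and rest is None
        self.head = head
        # A non-empty list always gets a LinkedList (possibly empty) as its rest.
        if not self.empty and rest is None:
            self.rest = LinkedList()
        else:
            self.rest = rest

    def prepend(self, item):
        """Add item to the front of this list."""
        if self.empty:
            self.head, self.rest, self.empty = item, LinkedList(), False
        else:
            # Save the current head and rest as the new rest, then change the head.
            self.rest = LinkedList(self.head, self.rest)
            self.head = item

    def __getitem__(self, index):
        """Return the item at position index, counting from 0."""
        if self.empty:
            raise IndexError('index out of range')  # base case: ran off the end
        if index == 0:
            return self.head                        # base case: this is the node we want
        return self.rest[index - 1]                 # recurse on the next node with index - 1

For example, after lnk = LinkedList(5) and lnk.prepend(7), lnk[0] is 7 and lnk[1] is 5.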

That's it for the week! I hope we all performed up to our own standards on our midterms!

Friday, February 21, 2014

Week 6: Trees

I've never been a morning person, especially not on Mondays. But ever since I switched to Comp. Sci., returning to the lecture hall every Monday morning hasn't felt the same.

But I digress...

On Monday, Dan covered tree terminology and tree traversal, then followed up with code implementations for the rest of the week. The terminology seems simple enough; "parent" and "siblings" make sense to just about anyone. But the tree traversals aren't as simple as they look.

On page 11 of week 6's slides, it says:

"Preorder: Visit the root node, do a preorder traversal of the
 left subtree, and do a preorder traversal of the right subtree

Inorder: Do an inorder traversal of the left subtree, visit the
root node, and then do an inorder traversal of the right
subtree

Postorder: do a postorder traversal of the left subtree, do a
postorder traversal of the right subtree, and visit the root node"

I thought I understood tree traversal after reading these three definitions, but once we got to the actual example, the inorder traversal was far from what I expected.

After a simple Google search, I found a website with a more detailed example and explanation for a CS beginner like me.

Here's the link: http://datastructuresnotes.blogspot.ca/2009/02/binary-tree-traversal-preorder-inorder.html


(This is a picture from the same website.)

The Inorder traversal is done recursively. Here's how the author of that blog explained it:

"(i) Traverse the left most subtree starting at the left external node, (ii) Visit the root, and (iii) Traverse the right subtree starting at the left external node.
Therefore, the Inorder traversal of the above tree will outputs:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10"
This cleared things up quite a lot. I had originally thought you start with the left subtree and just enumerate each level in ascending order, then visit the root, and eventually visit the right subtree. That is wrong. To illustrate my misconception, here's what my method would produce for the figure above: 1, 0, 3, 2, 5, 4, 6, 7, 9, 8, 10.

Of course, don't do that!
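
To pin down the correct order for myself, here's a small sketch with toy nodes and a tree of my own (not the course's Tree class or the figure above); the only thing that changes between the three traversals is where the root is visited.

class Node:
    def __init__(self, item, left=None, right=None):
        self.item = item
        self.left = left
        self.right = right

def preorder(node):
    # root, then left subtree, then right subtree
    if node is None:
        return []
    return [node.item] + preorder(node.left) + preorder(node.right)

def inorder(node):
    # left subtree, then root, then right subtree
    if node is None:
        return []
    return inorder(node.left) + [node.item] + inorder(node.right)

def postorder(node):
    # left subtree, then right subtree, then root
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.item]

# A small binary search tree:      2
#                                 / \
#                                1   4
#                                   / \
#                                  3   5
tree = Node(2, Node(1), Node(4, Node(3), Node(5)))
print(preorder(tree))   # [2, 1, 4, 3, 5]
print(inorder(tree))    # [1, 2, 3, 4, 5] -- sorted order, like the 0..10 output above
print(postorder(tree))  # [1, 3, 5, 4, 2]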

That's all for the week. Thanks!

Friday, February 14, 2014

Week 4&5: Recursion...

Before I switched to CS, I was warned by a friend who dropped out of CS in his third year. He told me that the thinking process needed to derive a piece of code gradually becomes unfathomable and abstract. It would be more poetic to picture a battlefield, a scene where your elder comrade has fallen right before your eyes, warning you to take heed before you venture far into the chaos... But I digress. I took his two cents and was mentally prepared to face difficult concepts.

So here I was, dealing with recursion for seemingly the first time in my life. Both Dan and our TA stressed the importance and delicacy of recursion. I get it. It's peculiarly logical, but hard to grasp because you're "defining something in terms of itself". It's no longer one-shot logic, and I feel like I'm getting to the weird part of math. Anyhow, Dan did a wonderful job showing us the ropes: tracing from the simplest argument upward. I watched Harvard's recursion lecture on Teaching Tree, but it didn't help me any more than Dan's lecture did. So far, for every piece of recursive code CSC148 has shown us, tracing it in one go is sometimes possible, but it's less effective than an organized, step-by-step tracing method.

For example, from the Permutations and String Length Examples shown in lecture,

def strlen(s):
  '''(str) -> int
  Return length of s.

  >>> strlen('abcd')
  4
  '''
  if s == '':
    return 0
  len_smaller = strlen(s[1:])
  return len_smaller + 1

You can trace it starting with strlen(''), the simplest case. 
Then, trace strlen('a') with the knowledge that strlen('') returns 0. 
And so on with strlen('ab'), then strlen('abc').

For this one, I can also trace it mentally without plugging values into the parameter, by considering how the recursion stops once you're left with an empty string after repeatedly removing the first character. In other words, the first character of a non-empty string is always counted by the + 1 in return len_smaller + 1; strlen(s[1:]) then strips the first character each time, so the last line effectively adds one for every character.

The latter way of tracing is much clumsier, and in the end I still indirectly traced it via the base case strlen('') and then strlen('a'). My takeaway is that for recursive code it's more efficient to be organized and work either bottom-up or top-down, depending on the case.
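
Written out, the bottom-up trace looks something like this:

strlen('')     -> 0                          (base case)
strlen('a')    -> strlen('')    + 1  ->  1   (so any 1-character string returns 1)
strlen('ab')   -> strlen('b')   + 1  ->  2
strlen('abc')  -> strlen('bc')  + 1  ->  3
strlen('abcd') -> strlen('bcd') + 1  ->  4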

Another example is the gcd code from the recursion lab. This one is easier to trace top-down when you're given just csc_gcd(15, 35).

def csc_gcd(n1: int, n2: int) -> int:
  """Return the GCD of non-negative integers n1 and n2.
  >>> csc_gcd(0, 0)
  0
  >>> csc_gcd(5, 0)
  5
  >>> csc_gcd(15, 35)
  5
  """
  # The gcd of n1 and n2 is n1 if n2 is zero; otherwise it is
  # the same as the gcd of n2 and (n1 % n2).
  if n2 > 0:
    return csc_gcd(n2, n1 % n2)
  else:
    return n1

call               n1    n2    n2 > 0?   next call / return
csc_gcd(15, 35)    15    35    yes       csc_gcd(35, 15 % 35) = csc_gcd(35, 15)
csc_gcd(35, 15)    35    15    yes       csc_gcd(15, 35 % 15) = csc_gcd(15, 5)
csc_gcd(15, 5)     15     5    yes       csc_gcd(5, 15 % 5)   = csc_gcd(5, 0)
csc_gcd(5, 0)       5     0    no        return n1 = 5

So you start from csc_gcd(15, 35), which becomes csc_gcd(35, 15), and then you keep applying csc_gcd(n2, n1 % n2) in an organized manner. This kind of table was introduced to us by our TA, Bryan. He also told us it would come in very handy during tests and exams, because it makes tracing easier for us and makes it easy for the marker to give full marks.

My bigger issue is with writing code that uses recursion. Honestly, I haven't gotten much practice yet. The labs felt rushed because I don't like missing anything the TA says, in case it's important like the table (which comes in really handy, imo). So when I spend time listening to the TA, I'm left with less time to read the next section and actually write code for it. He tries to compensate for the lack of time by taking up the solutions early, which means I never really get to think about a problem much before time's up. After each lab, I feel I should redo it on my own time. Yet I haven't. At least not yet.

Honestly, this week has been quite hectic with family and friends; too many trivial but time-consuming things kept me from studying and finishing my assignments. Hopefully I'll find time this coming weekend to think about binary_search(lst, s) and read the solution, as well as finish my A1.

Last but not least, I read two blogs today:
http://ilovecsc148.wordpress.com/ - 1/2/2014 Recursion with a bit of Intuition
http://quererna.blogspot.ca/ - 23/1/2014 Some comprehending for the introduction of Object-oriented Programming
I like the former better because the second one isn't as easy to read, though I do like quererna's approach. Also, ilovecsc148 made a well-detailed tutorial and gave a nice description of his or her feelings about recursion. Nice job to both of them! :)

Sunday, February 2, 2014

Third week: Exceptions

Over the past weeks, Dan has been covering exceptions, but they weren't really emphasized in this week's lab. The concepts seemed easy to grasp, yet when I worked on my exercise, it was quite challenging. Since the exercise is already past its due date, I'll post some related code here.

I understand that normally a program stops on the first error raised, but you can work around that with a try statement. For example:

class E1(Exception):
    pass

class E2(Exception):
    pass

def raise_errors(x):
    try:
        int(x)
    except ValueError:
        raise E1('Raise errors')
    finally:
        raise E2('because i said so')

raise_errors("now")

The finally clause makes sure the code under it is run even though an error was raised in the try block. This code produces a chain of three errors when you run it; they are shown below:

Traceback (most recent call last):
  File "H:\Libraries\Documents\UTM\CSC148\Exercises\E2\raise_errors.py", line 9, in raise_errors
    int(x)
ValueError: invalid literal for int() with base 10: 'now'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "H:\Libraries\Documents\UTM\CSC148\Exercises\E2\raise_errors.py", line 11, in raise_errors
    raise E1('Raise errors')
E1: Raise errors

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "H:\Libraries\Documents\UTM\CSC148\Exercises\E2\raise_errors.py", line 16, in <module>
    raise_errors("now")
  File "H:\Libraries\Documents\UTM\CSC148\Exercises\E2\raise_errors.py", line 13, in raise_errors
    raise E2('because i said so')
E2: because i said so

The reason I did this was a misinterpretation of the sentence "Your function reporter must always return normally, so it must catch all exceptions raised while it is executing and must never raise an exception itself". I thought function f was allowed to raise more than one error, since it says "catch all exceptions", so I wrote this code to try forcing a second error.

I posted about this concern on the discussion board, and it appears that function f should not keep going and raise more errors. Rest assured, I don't need to prepare for such a case when I do e2b.py.

To whoever reads my blog: what do you think about this?

Sunday, January 26, 2014

Second week: Lab madness!

As a total beginner in computer science, I found stacks completely new, but they weren't so bad! There aren't many highlights to mention, except that for the past midterm question, I feel there must be a less tedious way to solve it. If you have one, please share it with me! I also wonder whether Dan allows the use of functions not covered in class this year. Here's why:

During my first lab, I partnered with a new student who had never taken CSC108. He was friendly, but it was not easy to work with him. I remember last year's assignment requirements, where students were not supposed to use functions not covered in class. For example, itertools was helpful and would have shortened the code a lot; however, it wasn't allowed, because Dan wants us to learn by working with the tools we have for now. My partner did not know what CSC108 students are accustomed to, so he relied on Googling built-in functions and following his own methods. The worst part is that he deleted my code because he found the itertools version shorter and simpler! Although employers won't limit what you use, that still worries me. Since collaboration is required in team projects, I learned from this encounter to be more assertive about my own opinions and to prepare for the worst. If I were to relive that day, first, I would stop him from taking over all the work by explaining that lab marks are based on participation. Second, I would explain that it would be good practice to follow the course requirements. Last but not least, I would engage in the activity more by telling him that it was not easy to follow him and that he needed to explain his method more. Most of the time, we seem to neglect the importance of switching driver and navigator; I finally realized how beneficial it can be.

For fellow SLOGers, how was your lab partner? Was he or she easy to work with? Did you learn anything from that person? Let me know!

Sunday, January 12, 2014

First week of CSC148: A fresh new beginning, everyone!

Last year's CSC108 ended with quite a challenging exam, but regardless, here I am once again learning from Dan in CSC148. Although the first few weeks have been a hassle managing courses due to conflicts, I've still had the pleasure of learning about object-oriented programming, stacks, and exceptions in Python.

The first week centred on object-oriented programming. It was not surprising to see some revision in the first lessons: the basics of constructing classes were taught last year in CSC108, but it seems this semester will focus on them. Dan introduced a method of designing a new class that uses the noun, attributes, and operations to break down a general description of the class. Dan also took time to point out that these terms, e.g. "attributes", go by different names; for instance, "attributes" can also be called "instance variables" or "object state". Although introducing a pool of names at the same time is a logical and common way of teaching, it is nevertheless confusing for me. To solve this problem, I revisited my notes after class and categorized the terms. This is a copy of what I wrote:

"noun/ class name
attributes/ object state/ instance variables
operations/ object behavior/ function names"

It seems like a silly and trivial problem, yet doing this helped me tremendously in organizing and understanding what I've learnt. After all, one cannot build a house without a strong foundation. Other than that, everything seemed quite straightforward. As a side note, I'd suggest reading 15.7 and 15.9 from the textbook, as the lecture didn't go over them.
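
To make the mapping concrete for myself, here's a tiny made-up example (a BankAccount class of my own, not one from lecture):

class BankAccount:                        # noun -> class name
    def __init__(self, owner, balance=0):
        self.owner = owner                # attributes -> object state /
        self.balance = balance            #   instance variables

    def deposit(self, amount):            # operations -> object behaviour /
        self.balance += amount            #   method names

    def withdraw(self, amount):
        self.balance -= amount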

Here's a question for other SLOGers! Did you do the exercises in the recommended readings/references? Also, did you notice people gaming during lecture? I'll tell you what, I'm not even surprised. Meanwhile, the closet nerd inside me wanted to join the Hearthstone games or the Minecraft adventure...