It has been a while since I have updated this blog. All of my final exams fall in the first week of the exam schedule, which has put an increased workload on me. It seems I have to start studying now, on top of the tasks I already have to do.
I got my midterm back a few weeks ago. Looking it over, my performance was satisfactory; however, I didn't get an A, which is a shame because this midterm was a bit of a giveaway (relative to my other courses). After looking it over, I don't see any marks worth appealing, so I will live with it.
Assignment II was sneaky! Questions 2, 3 and 4 did not appear to be too difficult. Question 2 was just a truncated, floored logarithm base 7. Question 3 was a bit strange, but doable. Question 4 was from the class notes, and tedious.
Question 1, however, had a glaring problem: it was very easy to miss duplicate trees. Since the recursive definition generated sets of all trees with n nodes (and, as a basic property of set theory, a set contains only one instance of each element), it was very easy to overcount when calculating F(n).
One would naturally try to resolve this by constructing an F'(n) that subtracts the duplicates from F(n). While you're at it, you can show Z^n != X^n + Y^n for n >= 3 using only elementary mathematical concepts.
Seriously though, finding F'(n) is probably possible, but just very difficult and obscure. Thankfully, I was pointed in the right direction: use a recursive double.
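For what it's worth, here is a minimal sketch of what I take the double-recursion idea to mean. I am assuming the trees behave like binary trees; the count_trees function below is my own illustration, not the assignment's F(n). The point is that building each tree from a left and a right subtree in one pass generates every shape exactly once, so there are no duplicates to subtract afterwards.

from functools import lru_cache

# Hypothetical sketch: count distinct binary trees with n nodes by
# recursing on both subtrees at once, so each shape is built exactly once
# and no duplicate subtraction is needed.
@lru_cache(maxsize=None)
def count_trees(n):
    if n == 0:
        return 1  # the empty tree
    total = 0
    for left in range(n):        # left subtree gets `left` nodes
        right = n - 1 - left     # right subtree gets the rest (one node is the root)
        total += count_trees(left) * count_trees(right)
    return total

print([count_trees(n) for n in range(6)])  # 1, 1, 2, 5, 14, 42

(For binary trees this recurrence just produces the Catalan numbers; the assignment's trees may well be something else, but the "recurse on the pieces together" structure is the same.)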
Surprisingly, I also spent a fair amount of time on Q4. This was because one first had to figure out whether the algorithm was flawed. Of course it isn't; however, the algorithm can fail if the machine executing it has a poor floating point architecture.
For example, it can be shown that the fractional part of sqrt(k(k+1)) must be strictly less than 0.5, since k(k+1) < (k + 0.5)^2. But as k grows, on a finite machine with approximated floating point numbers, the computed value can actually reach a fractional part of 0.5, and since the algorithm converts the square root to an integer, this will round up.
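To make that concrete, here is a small Python check of my own (not part of the assignment). The first function verifies the exact bound with integer arithmetic; the second tests, for a given k, whether the floating point square root has drifted up to 0.5 or beyond.

import math

def fractional_part_below_half(k):
    # Exact integer check of sqrt(k(k+1)) < k + 0.5,
    # i.e. 4k(k+1) < (2k+1)^2, which holds for every k.
    return 4 * k * (k + 1) < (2 * k + 1) ** 2

def float_midpoint_reaches_half(k):
    # Does the *floating point* computation land at or above k + 0.5?
    # If so, converting the "midpoint" to an integer can give k+1.
    return math.sqrt(k * (k + 1)) >= k + 0.5

k = 12345
print(fractional_part_below_half(k))    # always True, exactly
print(float_midpoint_reaches_half(k))   # may become True for large enough k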
Keeping this in mind, this is how you can make the algorithm fail:
1) take k to be an incredibly large integer.
2) use a set of inputs (A, x, b, e) for bs1: let b = k, e = k+1, and let x be at position k+1.
Then, when the midpoint is calculated, the floating point rounding error will produce k + 0.5, which rounds to k+1.
When the computer compares A[m] >= x, clearly A[m] = x. This results in:
return bs1(x,A,b,m)
but e = m. Therefore bs1(x,A,b,m) = bs1(x,A,b,e), and an infinite recursion is produced. With a big enough k (or a cheap enough FPU), one can break this algorithm. A rough sketch of this failure mode in code follows.
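I don't have the notes' actual pseudocode for bs1 in front of me, so the version below is only my reconstruction: I am guessing that the midpoint is the square root of b*e converted to an integer with a round-half-up step (that is what the sqrt(k(k+1)) discussion is about), the parameter order is taken from the return statement above, and the base case is invented.

import math

def bs1(x, A, b, e):
    # Hypothetical reconstruction: binary search on A[b..e] whose midpoint
    # is the square root of b*e, converted to an integer by rounding half up.
    if b == e:
        return b if A[b] == x else -1
    m = int(math.sqrt(b * e) + 0.5)
    if A[m] >= x:
        # With b = k, e = k+1 and the floating point square root drifting
        # up to k + 0.5, m becomes e and this call repeats itself forever.
        return bs1(x, A, b, m)
    else:
        return bs1(x, A, m + 1, e)

Under exact arithmetic the midpoint always lands strictly below e, so the range shrinks and the search terminates; the infinite recursion only appears once the floating point square root gains the extra half needed to push m onto e.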
Of course, this is not the algorithm's fault; it's a hardware problem. It's a fun thing I realized while trying to find flaws in the algorithm.
END.