
Thoughts on exercises for SICP 1.2

From: Paul S.
Sent on: Saturday, March 15, 2014 11:22 PM
Where section 1.1 was mostly about getting familiar with the basic syntax
of Scheme, 1.2 begins to work toward one of Abelson and Sussman's primary
goals: to start building the student's ability to form a link between the
structure of a program (in their language, a 'procedure') and the pattern
of computation activity that results (the 'process').

If you haven't studied formal computer science before (or, at a minimum,
if you don't already have a basic familiarity with Big-O notation), some of
this section could be a bit of rough going.

The Big-Theta notation used in the text, i.e., Θ(k), is similar to big-O
notation, but implies that the bound is both an upper and a lower bound.
Big-O, on the other hand, just implies that an algorithm grows /no faster/
than the parenthetical expression following the "O"; it's an upper bound
only. For example, 3n + 5 is O(n) and also O(n^2), but it is only Θ(n).

The following is just an informal discussion intended to point you in the
right direction...going into the full formal description isn't necessary to
grok the essence of what's going on in the text.  If you've got more
questions, bring 'em to the discussion group.

Consider the following functions:

 ;; constant time: just returns its argument
 (define (a n)
   n)

 ;; linear recursion: one recursive call for each decrement of n
 (define (b n)
   (if (= n 0)
       0
       (+ n (b (- n 1)))))

 ;; calls 'b' (itself linear) once for each value of n down to 0
 (define (c n)
   (if (= n 0)               ; note: the test is on n, not c
       0
       (+ (b n) (c (- n 1)))))


Function 'a' is a classic example of an O(1) procedure: no matter what the
size of its input 'n', it will always take a constant amount of time to
run.  In terms of big-O notation, a=O(1).

Function 'b' is a linear recursion...for any 'n', there will be 'n' further
recursive calls to 'b', for a total of n+1 calls (including the original).
The advantage of big-O-style notation is that you effectively sweep the
lower-order terms under the rug and focus on the terms that will end up
"dominating" the runtime as 'n' increases.

In terms of big-O notation, b=O(n).  But (and here is the reason for the
big-Theta), it is equally valid to say that b=O(n^2)...or even b=O(2^n)!
Effectively, b grows "as fast as or slower than" the expression following
the big-O; it's only an upper bound.

But, we couldn't say that b=O(log n), because 'b' grows strictly "faster
than" log n, so log n can't serve as an upper bound.

In terms of big-Theta notation, though, Theta(n) is a "tight" bound for
'b'...that is, 'b' grows "neither faster nor slower" than 'n', ignoring
constant factors.

Looking at function 'c', we can see that there is a call to 'b' (a
Theta(n) function) for every value of 'n' as 'n' is decremented to 0, which
happens 'n' times.  The full recurrence relation is a bit much to go into,
but, roughly, 'c' ends up performing about n + (n-1) + ... + 1 = n(n+1)/2
additions, which is on the order of the square of the value of 'n'...so, it
is both O(n^2) (it grows no faster than n^2) and Theta(n^2) (it grows as
fast as n^2, but no faster, ignoring lower-order terms).

'c' is also O(n^3), O(n^4), and even O(2^n), as all of those expressions
grow at least as fast as 'n^2' and so are still valid upper bounds.  But,
'c' is only Theta(n^2).
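
If you'd like to see the n^2 growth empirically, here is a small sketch
that counts the additions 'c' performs ('counted-+' and 'add-count' are
just names I'm making up for this illustration, not anything from the
text); the count should come out to roughly n^2/2:

 (define add-count 0)

 ;; like +, but keeps a running count of how many times it is called
 (define (counted-+ x y)
   (set! add-count (+ add-count 1))
   (+ x y))

 ;; same 'b' and 'c' as above, with + swapped for counted-+
 (define (b n)
   (if (= n 0)
       0
       (counted-+ n (b (- n 1)))))

 (define (c n)
   (if (= n 0)
       0
       (counted-+ (b n) (c (- n 1)))))

 (c 100)
 add-count   ; 5150, i.e. 100*101/2 additions inside 'b' plus 100 in 'c'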

This is hard going if you haven't seen the notation before (and I'm not
sure this description will be helpful), but try not to get too hung up on
it. If this is your first exposure, just try to follow the general gist of
the arguments in the text, and generally understand how many more
operations have to be performed as the input to a procedure becomes larger.

Exercise 1.10, in particular, I found to be a bit problematic...I was
looking for a tidy, algebraic description for 'h'...there's not really a
standard notation for the resulting order of growth, so don't spend too
much time trying to figure it out if you're running short on time.

Exercise 1.9, on the other hand, is straightforward and important. There
are going to be a bunch more exercises along this line where you need to
understand the difference between recursive and iterative processes, so
it's worth making sure that you understand this.  Exercise 1.11 is one of
the first of the exercises where you'll need to rewrite a procedure to
produce one sort of process or the other (the sketch below gives the
flavor of that kind of rewrite).
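
For instance (not one of the book's exercises, just a sketch built on the
'b' from above), the same linear-recursive sum can be rewritten so that it
generates an iterative process by carrying the running total along as an
extra parameter:

 (define (b-iter n)
   (define (iter k total)
     (if (= k 0)
         total
         (iter (- k 1) (+ total k))))
   (iter n 0))

Both versions compute the same sum, but here the whole state of the
computation is captured in 'k' and 'total', so there's no growing chain of
deferred additions to keep track of.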

Exercise 1.12 is straightforward and worth working on, but don't get bogged
down with 1.13, which is an inductive mathematical proof!

Exercise 1.14 demonstrates why this book isn't really a good introduction
to big-O and big-Theta notation! Drawing the tree is useful, but the text
doesn't give you a lot of resources to work with for analyzing the order of
growth.

Exercises[masked] are useful, and one of the best places to focus your
energy. 1.19 is a bit rougher, and relies on a bit more mathematical
insight. Don't lose heart: the *subject* of these exercises is not so much
the point as their goal of getting you to start writing procedures.
Just move on...if I were teaching this course, I wouldn't assign 1.19
unless I knew that the student had already studied this sort of proof in
another mathematics course.

Exercise 1.20 is one of the few that jumps back to applicative order vs.
normal order. Don't spend too much time with it.

Exercise 1.21 is easy, just getting you to use the procedure defined in the
text.  1.22, on the other hand, is extremely important: all this discussion
of big-O and big-Theta is the theoretical counterpart to the practical
issue of how long it takes your program to run! 1.22 gets you experimenting
with actually measuring runtime.  If you've read Robert Sedgewick's
_Algorithms_ book, you'll know he's a really big proponent of this:
actually *measure* how long it takes your programs to run. Theoretical
analysis is important, but developing the skills to run experiments on your
own code is an important step forward in your ability as a computer
scientist.  If you have time, doing this sort of analysis on any of the
other exercises can be informative if you're at all curious (there's a
sketch of a simple timing harness below).
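
As a starting point, here is a minimal timing sketch built on the
'runtime' primitive that exercise 1.22 mentions (it's available in MIT
Scheme; the name and the units vary between implementations, so adjust for
yours, and 'time-it' is just a name I'm using here):

 ;; evaluate a zero-argument procedure and report how long it took,
 ;; e.g. (time-it (lambda () (c 1000)))
 (define (time-it thunk)
   (let ((start (runtime)))
     (let ((result (thunk)))
       (display "elapsed: ")
       (display (- (runtime) start))
       (newline)
       result)))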

The last batch of exercises [masked]) all build on each other, and
sitting down for a few hours to explore them is the other best place to
spend your time.  Some of them (like the one on Carmichael numbers) have a
bit of a Project Euler flavor, but they approach similar problems from
different angles.  Changing your code around to make the variants work can
be a pain...and that is one of the main points! By going through these
different variants, you'll be preparing for the progressive abstractions
that will be built in the next section, section 1.3.

Paul
