Friday, October 29, 2010

7.3-7.5, Due on November 1

I am interested in how cryptography answers a variety of problems that go beyond just disguising messages. For example, I was intrigued by section 7.3 in regards to making sure Alice doesn't change her message and Bob doesn't read the message until after an event has occurred. In section 7.4 the authors claim that there is a situation involving elliptic curves where the decision Diffie-Hellman problem is solvable quickly, which I am interested in. I recall getting a teaser in an earlier section about elliptic curves as well.
The hardest part for me was understanding the ElGamal cryptosystem. I understand the big idea because I understand public key cryptography (not completely, but mostly), but I don't see how the public and private keys are derived in regards to the discrete log problem. The RSA was easy because the private key was just two large primes, but I'm not sure what the private keys are in the ElGamal system.
P.S. My high school German teacher was named Mrs. El Gamel and she told me 'El Gamel' was Arabic for 'The Camel.'
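After staring at the section a while longer, here is my rough sketch of ElGamal in Python, with tiny toy numbers (nothing like a secure size; I borrowed them from what I believe is the book's example). If I have it right, the public key is (p, alpha, beta) and the private key is just the secret exponent a:

```python
from random import randint

# Toy ElGamal key setup: beta = alpha^a mod p is published, a stays secret.
p = 2579                    # a prime
alpha = 2                   # the base
a = 765                     # private key: the secret exponent
beta = pow(alpha, a, p)     # published along with p and alpha

def encrypt(m, k):
    # the sender picks a random session exponent k
    r = pow(alpha, k, p)
    t = (m * pow(beta, k, p)) % p
    return r, t

def decrypt(r, t):
    # t * r^(-a) ≡ m (mod p); r^(-a) computed as r^(p-1-a) by Fermat's Little Theorem
    return (t * pow(r, p - 1 - a, p)) % p

m = 1299
assert decrypt(*encrypt(m, randint(1, p - 2))) == m
```

So recovering the private a from the public beta = alpha^a is exactly a discrete log problem, which I suppose is why chapter 7 dwells on it.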

Wednesday, October 27, 2010

7.2, Due on October 29

I enjoyed reading about Baby Step, Giant Step (section 7.2.2). Not only did it remind me of Neil Armstrong's famous quote, but it also reminded me of the meet-in-the-middle attacks from the earlier sections. I also liked reading this section in light of the material about factoring. Knowing that the past two weeks have been spent learning how to factor in order to break weak RSA systems helped me understand why we are learning all this information about discrete log problems. I'm guessing that the discrete logarithm problems are going to be used in a public key cryptosystem and all this information we are learning is going to help us crack weak versions of that system.
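To make sure I really understood the meet-in-the-middle flavor, I tried sketching Baby Step, Giant Step in Python (toy sizes only, and my own example numbers):

```python
from math import isqrt

# Baby step, giant step: solve alpha^x ≡ beta (mod p) in about sqrt(p) work by
# matching baby steps beta * alpha^j against giant steps alpha^(i*m).
def bsgs(alpha, beta, p):
    m = isqrt(p - 1) + 1
    baby = {(beta * pow(alpha, j, p)) % p: j for j in range(m)}   # beta * alpha^j
    giant = pow(alpha, m, p)
    cur = 1
    for i in range(1, m + 1):
        cur = (cur * giant) % p           # alpha^(i*m)
        if cur in baby:
            return i * m - baby[cur]      # alpha^(i*m - j) = beta
    return None

# Example: 2^x ≡ 9 (mod 11); x = 6 works since 2^6 = 64 ≡ 9 (mod 11).
assert bsgs(2, 9, 11) == 6
```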
The two hardest subsections for me were 7.2.1, about the Pohlig-Hellman Algorithm, and 7.2.3, about the Index Calculus. The examples were quite long, and despite finally understanding the log notation I had difficulty following it through the examples.

Tuesday, October 26, 2010

6.5-6.7 and 7.1, Due on October 27

The most interesting portion of these sections for me was the idea of trapdoors. I liked reading about the variety of trapdoors available and how they differ according to the public key system. I also enjoyed reading about other types of public key cryptosystems besides the RSA. It's easy to avoid thinking outside the box but this section helped me see some other alternatives to the RSA.
One thing I didn't really understand was the discrete logs in section 7.1. I am familiar with the idea of logarithms but I don't understand the x = L_alpha(beta) notation. I also don't really see how the discrete logs apply to anything so far. I imagine their application will become more apparent in the ensuing sections. The other hard section for me was 6.6, An Application to Treaty Verification. I don't quite know what all is going on there.
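Writing out what the notation means helped me a little: x = L_alpha(beta) is just the exponent x with alpha^x ≡ beta (mod p). A brute-force version (only sane for tiny p) makes that concrete:

```python
# Brute-force discrete log: find the exponent x with alpha^x ≡ beta (mod p).
# This is exactly what the notation x = L_alpha(beta) names.
def L(alpha, beta, p):
    for x in range(p - 1):
        if pow(alpha, x, p) == beta:
            return x
    raise ValueError("beta is not a power of alpha mod p")

# With p = 11 and alpha = 2: 2^4 = 16 ≡ 5 (mod 11), so L_2(5) = 4.
assert L(2, 5, 11) == 4
```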

Sunday, October 24, 2010

6.4.1, Due on October 24

I guess the thing I am most confused about is how we factor the numbers into smaller primes. The example of the basic principle on page 183 starts by saying 9398^2 = 5^5 * 19 (mod 3837523), but I don't really see how they got that or how I would factor an even bigger number. I believe, upon finishing the chapter, that they give the explanation on page 185, but I didn't quite see how it all related. I'm not sure where this ([sqrt(in)] + j)^2 equation came from or how it helps us factor n into smaller primes.
The most interesting part of this section for me was the idea of using matrices. I enjoy linear algebra and I loved my Matrix Analysis class, so I'm pretty eager to see additional applications of matrices. The matrix formed in this section must serve a purpose, although I have not successfully identified it yet. I understand how the matrix is formed, but I don't see where the linear independence comes in, or why we move to mod 2.
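After mulling it over, here is my reading of the mod 2 step, spelled out with the relations from the book's example for n = 3837523 (if I copied them correctly): a linear dependence mod 2 picks out relations whose combined prime exponents are all even, so the product of right-hand sides is a perfect square, giving x^2 ≡ y^2 (mod n), and then gcd(x - y, n) can split n.

```python
from math import gcd

# Relations (all mod n = 3837523):
#   9398^2  ≡ 5^5 * 19            19095^2 ≡ 2^2 * 5 * 11 * 13 * 19
#   1964^2  ≡ 3^2 * 13^3          17078^2 ≡ 2^6 * 3^2 * 11
n = 3837523
x = (9398 * 19095 * 1964 * 17078) % n
# product of the right-hand sides = 2^8 * 3^4 * 5^6 * 11^2 * 13^4 * 19^2,
# every exponent even, so its square root is the integer below:
y = (2**4 * 3**2 * 5**3 * 11 * 13**2 * 19) % n
assert (x * x - y * y) % n == 0      # x^2 ≡ y^2 (mod n)
d = gcd(x - y, n)                    # the book gets d = 1093 here (n = 1093 * 3511)
```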

Thursday, October 21, 2010

6.4, Part I, Due on October 22

Fermat's Factorization was the most interesting portion of this quick section. It is so simple but very powerful for small numbers and it still took a genius to come up with it. I like it because I can understand it and it uses the idea of working backwards to factor (relatively) large numbers.
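Here is the working-backwards idea as I understand it, sketched in Python: write n = x^2 - y^2 = (x - y)(x + y) and hunt upward from sqrt(n) until x^2 - n is a perfect square (the number below is my own toy example, not from the book):

```python
from math import isqrt

# Fermat's factorization: search for x just above sqrt(n) with x^2 - n a perfect
# square y^2, giving the factors (x - y) and (x + y).
def fermat_factor(n):
    x = isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:
            return x - y, x + y
        x += 1

# 295927 = 541 * 547: two primes close together, the easy case for Fermat.
assert fermat_factor(295927) == (541, 547)
```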
The p-1 Factoring Algorithm was the only other topic discussed, and it wasn't crystal clear to me. One question I have is how B is chosen. Its only description is that it is a bound, but I'm not sure what it is bounding. This method looks a lot like the Miller-Rabin Primality Test, except it actually gives us factors instead of just determining if a number is prime or composite.
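My best guess at what B bounds, written as code: if every prime power dividing p - 1 is at most B, then p - 1 divides B!, so a^(B!) ≡ 1 (mod p) by Fermat's Little Theorem, and gcd(a^(B!) - 1, n) catches the factor p. The toy n here is my own example:

```python
from math import gcd

# p-1 factoring with bound B: compute a^(B!) mod n by exponentiating by 2, 3, ..., B
# in turn, then take a gcd. Works when some prime p | n has p - 1 "B-smooth".
def p_minus_1(n, B, a=2):
    for j in range(2, B + 1):
        a = pow(a, j, n)          # after the loop, a = 2^(B!) mod n
    d = gcd(a - 1, n)
    return d if 1 < d < n else None

# n = 540143 = 421 * 1283, and 421 - 1 = 420 = 2^2 * 3 * 5 * 7: every prime power
# dividing 420 is at most 7, so B = 7 suffices to pull out 421.
assert p_minus_1(540143, 7) == 421
```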

Monday, October 18, 2010

6.3, Due on October 20

This whole section was interesting to me and I enjoyed learning about ways to get around factoring numbers. I found the methods to be very intelligent solutions to a very difficult problem. It's wise of mathematicians to realize that it's a lot easier to see if a number is composite than to see if it's prime, and since a number can't be both you only need to check for one or the other condition. One thing I like about mathematics is that it uses efficient methods to solve difficult problems, and this is definitely one such case. I also find it interesting that, as important as primes are, we still have no truly easy way to determine whether a very large number is prime. Mathematics is definitely a living field with intriguing problems still existing today.
The actual primality tests were difficult to follow. Fermat's test was easier to understand (at least the suppositions), but the latter two tests were much more difficult to read and follow. Their preceding arguments were bolstered by examples, but I do not understand what exactly is going on with these tests.
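To pin down at least one of them, here is my sketch of the Miller-Rabin test in Python. As I understand it, it can only prove compositeness; passing many random bases just means "probably prime":

```python
from random import randrange

# Miller-Rabin: write n - 1 = 2^k * m with m odd. For a random base a, if a^m is
# not ±1 and repeated squaring never hits -1, then a witnesses that n is composite.
def miller_rabin(n, trials=20):
    if n < 4:
        return n in (2, 3)
    k, m = 0, n - 1
    while m % 2 == 0:
        k, m = k + 1, m // 2
    for _ in range(trials):
        a = randrange(2, n - 1)
        x = pow(a, m, n)
        if x in (1, n - 1):
            continue
        for _ in range(k - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # witness found: n is definitely composite
    return True                   # probably prime

# 561 = 3 * 11 * 17 is a Carmichael number that fools Fermat's test but not this one.
assert miller_rabin(1093) and not miller_rabin(561)
```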

Friday, October 15, 2010

3.10, Due on October 18

I enjoy this idea of square roots in modular arithmetic. Sometimes when I get a little confused in modular arithmetic I just remind myself what modular arithmetic actually means, and when I put the equation into 'regular' arithmetic it makes more sense. One question I have about the Legendre and Jacobi symbols is regarding their relationships to fractions. Is the fraction notation used just as an arbitrary symbol or is there a connection between the modular arithmetic and fractions? I'm guessing there is a connection but I haven't made it yet.
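From what I can tell so far, the fraction-style notation (a/p) is just a symbol, not an actual fraction, and Euler's criterion computes it as a power. A quick sketch of that:

```python
# Euler's criterion: for an odd prime p and gcd(a, p) = 1, the Legendre symbol
# (a/p) equals a^((p-1)/2) mod p: +1 when a is a square mod p, -1 when it is not.
def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# 2 is a square mod 7 (since 3^2 = 9 ≡ 2), but 3 is not.
assert legendre(2, 7) == 1 and legendre(3, 7) == -1
```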
I became a little confused with the examples. One difficulty for me was that they kept referring to the steps of the Theorem on page 91 in the examples, and I kept having to flip between pages to understand the reasoning. While it may sound trivial, having to flip the pages over and over again made the example much harder to follow. In class we have the advantage of writing the theorem down on one board and doing an example on the other so that they may be seen simultaneously.

Thursday, October 14, 2010

3.9, Due on October 15

I like algebra and I like modular arithmetic, so I enjoyed this section. But I didn't exactly follow all that transpired, and I have a lot of questions. For example, the Proposition on page 86 requires p to be congruent to 3 mod 4, but I am wondering how often this is going to happen. Is this a frequent characteristic of primes? I'm also wondering how they got that (p+1)/4 = 3 in the example on page 87. I see that p = 11, but I'm not sure why they broke it up in that way.
I enjoyed the last half of page 87 where the authors talk about breaking up the modulus to get two different congruences, but I don't see it as too helpful since most of our mods in this section seem to be prime numbers. What is the connection there? I can see where this is going (factoring n) and I like it. I'm guessing that in certain instances of the RSA, n is going to be factorable.
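Working out the (p+1)/4 business for myself: when p ≡ 3 (mod 4), the exponent (p+1)/4 is a whole number, and squaring a^((p+1)/4) gives a^((p+1)/2) = a * a^((p-1)/2) ≡ a (mod p) whenever a is a square mod p, by Euler's criterion. A sketch:

```python
# Square roots mod a prime p with p ≡ 3 (mod 4): a single modular exponentiation.
def sqrt_mod(a, p):
    # assumes p is prime, p ≡ 3 (mod 4), and a is a square mod p
    return pow(a, (p + 1) // 4, p)

# p = 11 ≡ 3 (mod 4), so (p + 1)/4 = 3, matching the example I was puzzling over.
x = sqrt_mod(5, 11)           # 5 ≡ 4^2 (mod 11) is a square
assert (x * x) % 11 == 5
```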

John Friedlander Colloquium

I found John Friedlander's talk on prime numbers to be very interesting and directly in line with what we have been covering in Cryptography. I was especially intrigued by the twin prime pairs and the question of whether there are infinitely many of them. Like most people, I like problems whose statements are easy to understand but less simple to prove. Knowing the different names or classifications of the primes, such as the Mersenne and Fermat primes, also reinforced to me their importance. One connection Mr. Friedlander made that I had not noticed previously was the one between the different steps of the Euclidean Algorithm and the gcd in each step. When he wrote (91,42)=(49,42)=(42,7)=7 the link between the Euclidean Algorithm and the gcds made much more sense. The most confusing part for me was the Sieve of Eratosthenes. In the limited time spent on it I didn't catch exactly how it worked. I did like the Legendre Formula though, even if I didn't fully understand it.
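I tried reproducing the gcd chain from the talk with a quick subtraction-style sketch of the Euclidean Algorithm; each step replaces the larger number by the difference, and the gcd never changes:

```python
# Subtraction form of the Euclidean Algorithm, recording each pair along the way.
def gcd_chain(a, b):
    steps = [(a, b)]
    while b:
        a, b = max(a - b, b), min(a - b, b)
        steps.append((a, b))
    return steps

steps = gcd_chain(91, 42)
assert steps[:3] == [(91, 42), (49, 42), (42, 7)]   # the chain from the talk
assert steps[-1][0] == 7                            # the gcd
```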

Tuesday, October 12, 2010

6.2, Due on October 13

One interesting thing for me in this section was just how smart some people are. I mean really, the attacks they have on this system are mostly algebra-based, and yet they are so complex and specific that it is mind-boggling how anyone came up with them. It was also interesting to see how the continued fractions played a part in these attacks. I thought that section was pretty random, but now I see how it fits in.
What does "in time polynomial in log n" mean (page 170)? I've seen similar wording a few times in the book. Except for the low exponent attack, which made some sense, I didn't understand the other attacks. I did, however, understand the conditions for such attacks, which I feel is important. The short plaintext attack was reminiscent of the DES system with its XORing, but I didn't really follow how it went.
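The low exponent attack is the one I could almost follow, so I tried writing it down: if e = 3 and the message m happens to satisfy m^3 < n, then c = m^3 mod n is literally m^3 with no wraparound, and an ordinary integer cube root recovers m with no private key at all. The modulus below is a made-up toy:

```python
# Integer cube root by binary search: smallest x with x^3 >= c.
def icbrt(c):
    lo, hi = 0, 1 << ((c.bit_length() // 3) + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < c:
            lo = mid + 1
        else:
            hi = mid
    return lo

n = 3837523 * 2579            # toy RSA modulus, my own made-up example
m = 1299
c = pow(m, 3, n)              # m^3 < n, so no reduction actually happens
assert icbrt(c) == m          # the "attack": just take a cube root
```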

Friday, October 8, 2010

3.12, Due on October 11

One portion of this section that was unclear to me was the end of the Theorem on page 103. It states that r/s = p_i/q_i. I thought maybe the p and q were distinct primes, but then in some of the examples p and q were not primes (such as 22/7 as an approximation to pi). I also don't see why they need a subscript i attached to them. How are these two numbers related? I also didn't follow how they worked backward on the second-to-last problem on page 103. The authors started with 12345/11111 and then somehow pulled all these fractions out. I don't know how they did that.
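I think I eventually figured out how they pulled the fractions out of 12345/11111: repeatedly split off the integer part and flip the remainder over. A sketch that also folds the terms back up to check:

```python
from fractions import Fraction

# Continued fraction expansion: peel off the integer part, then invert what's left.
def cf_expansion(num, den):
    terms = []
    while den:
        terms.append(num // den)
        num, den = den, num % den     # invert the fractional part
    return terms

terms = cf_expansion(12345, 11111)
assert terms == [1, 9, 246, 1, 4]

# Folding the terms back up reproduces the original fraction:
x = Fraction(terms[-1])
for t in reversed(terms[:-1]):
    x = t + 1 / x
assert x == Fraction(12345, 11111)
```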
This idea of approximating without a calculator is very intriguing to me. I feel like I am learning all sorts of neat math tricks in this class that, if nothing else, I can use to impress my friends with my non-calculator math skills. I like these continued fractions although I will admit I find them to be a little detached from the rest of the unit. I'll be eager to see how they fit in with the RSA.

Thursday, October 7, 2010

6.1, Due on October 8

The idea of the RSA was interesting to me. The idea that a key can be made public and a message still kept secure is, in my mind, revolutionary. I really like the idea of the RSA even if I don't understand everything about it. On the one hand, it seems to me this idea should have been discovered before the 1970s. But on the other hand, it seems that computers weren't capable of this type of mathematics until the 1980s, so the method seems ahead of its time.
I was confused about how the phi function fits into this algorithm. I think I understand what's going on with the two distinct primes but I got confused with the phi. I also got a little lost in the example, but that will probably be cleared up in class this afternoon.
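Here is my attempt to place phi, with toy numbers of my own choosing: the private exponent d is built as the inverse of e mod phi(n), and Euler's Theorem is what makes m^(ed) ≡ m (mod n):

```python
# Toy RSA: n = pq is public, phi(n) = (p-1)(q-1) stays secret because computing it
# requires the factorization, and d satisfies e*d ≡ 1 (mod phi(n)).
p, q = 61, 53                 # toy primes, nothing like real key sizes
n = p * q                     # 3233, public
phi = (p - 1) * (q - 1)       # 3120, kept secret
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)
m = 65
c = pow(m, e, n)              # encrypt with the public key
assert pow(c, d, n) == m      # decrypt with the private key
```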

Tuesday, October 5, 2010

3.6, 3.7, Due on October 6

I like the idea of simplifying the exponents to get an easier number to work with. It also never occurred to me that exponents need to be 'modded' as well as their bases when working in modular arithmetic. This seems like a simple idea I should have picked up on sooner, but I never did until this section. I also enjoyed the part about primitive roots. They remind me of generators in finite groups; essentially, the two are akin. As much as I liked them, however, I did get confused by the proposition immediately following their introduction.
One thing I'm confused about is how simple Euler's phi function really is to calculate. It seems easy for small, two-digit numbers but much harder for larger numbers. I think the answer is found on page 81, but I couldn't quite interpret the notation well enough to connect it with my question.
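My current understanding, as a sketch: phi is easy once you can factor n, via phi(n) = n * product(1 - 1/p) over the distinct primes p dividing n, and hard exactly when factoring n is hard:

```python
# Euler's phi by trial-division factoring: for each distinct prime p | n,
# multiply the running result by (1 - 1/p).
def phi(n):
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p      # apply the factor (1 - 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                          # one prime factor left over
        result -= result // m
    return result

assert phi(10) == 4                    # units mod 10: 1, 3, 7, 9
assert phi(81) == 54                   # 81 * (1 - 1/3)
```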
I think there will be great consequences coming out of Fermat's Little Theorem and Euler's Theorem, but I haven't quite grasped them yet. Their proofs and applications were hard for me to follow, but going over them again in class will be very valuable.

Sunday, October 3, 2010

3.4-3.5, Due on October 4

I enjoyed the opening description of the Chinese Remainder Theorem. I found it clever of the authors to work backward in explaining the Theorem, and it certainly helped me understand better what the main idea of the theorem is. I was also impressed with the efficiency of the modular exponentiation in section 3.5. I love being efficient, and this struck me as a very creative way to simplify exponential modular arithmetic. When people say that we don't need to learn math because computers can just do it, I would cite this modular exponentiation, where a computer doing it naively runs out of memory but a human brain with its ingenuity does not.
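The efficiency trick, as I understand it, is squaring repeatedly and reducing mod n at every step so the intermediate numbers never get big. A sketch:

```python
# Modular exponentiation by repeated squaring: walk the binary digits of the
# exponent, squaring the base and reducing mod n each step.
def power_mod(base, exp, n):
    result = 1
    base %= n
    while exp:
        if exp & 1:                    # current binary digit of the exponent
            result = (result * base) % n
        base = (base * base) % n       # square for the next digit
        exp >>= 1
    return result

assert power_mod(2, 1234, 789) == pow(2, 1234, 789)
```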
The hardest part for me to follow was the general form of the Chinese Remainder Theorem. I understand that a solution is guaranteed to exist, but I didn't feel that they gave a very good explanation of how to find the solution when there are more than two moduli.
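After some scratch work, here is how I think the solution can be built for any number of pairwise coprime moduli, folding in one congruence at a time (with the classic x ≡ 2, 3, 2 mod 3, 5, 7 puzzle as the test case):

```python
# General Chinese Remainder Theorem by back-substitution: maintain a partial
# solution x mod M and extend it to satisfy each new congruence in turn.
def crt(residues, moduli):
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        # choose t so that x + M*t ≡ r (mod m); needs gcd(M, m) = 1
        t = ((r - x) * pow(M, -1, m)) % m
        x += M * t
        M *= m
    return x                      # the unique solution mod M

# x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7)  ->  x = 23
assert crt([2, 3, 2], [3, 5, 7]) == 23
```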