Friday, December 3, 2010

16.5, Due on December 8

This section was interesting to read because it was neat to see an alternate way of encrypting messages, signing documents, and exchanging keys using elliptic curves. Finally I see the application of this whole chapter on elliptic curves! I also found it interesting that this method protects against values of n with small prime factors.
From section 16.5.1 I am wondering how a message is represented as a point on a curve. Apparently it was described in section 16.2 but I don't recall how to do it. One other aspect that I'm still a little shaky on is adding points to get a third point on an elliptic curve. [This is probably because I am writing this quite early, and we have two more class periods of practice before we get to this lesson.] I bring this up because adding points on an elliptic curve seems to be at the heart of these cryptosystems.
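To get the point addition straight in my head before class, I tried sketching it in Python on a tiny curve of my own choosing (y^2 = x^3 + 2x + 3 mod 7, not an example from the book):

```python
# A sketch of adding two points on y^2 = x^3 + a*x + b (mod p).
# None stands for the point at infinity, the group identity.
def ec_add(P, Q, a, p):
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

print(ec_add((2, 1), (3, 1), 2, 7))  # (2, 6), which is back on the curve
```

The same x3 and y3 formulas cover both adding distinct points and doubling a point; only the slope changes.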

16.4, Due on December 6

This section is really just a more specific portion of the other sections. I found it interesting that the tangent lines to this modified elliptic curve are all vertical. This type of cryptanalysis incorporates partial derivatives and lots of Calculus, which I can appreciate.
I understand that elliptic curves are much easier to work with in mod 2, but I don't see why they are easier to work with in mod 2^n. That appears to be a big advantage but I don't see why. I am also confused about the end of the example on page 362. I am not sure how the authors found -(w,w^2).

Thursday, December 2, 2010

16.3, Due on December 3

Frankly, I've found this whole chapter to be interesting. I love Algebra and this is a type of Abstract Algebra that is interesting. I find it fascinating that it can be used to factor large numbers. I don't understand everything that's going on but the main ideas are really intriguing. One of the things I don't understand is how it's easy to find B!P. That seems like a lot of work.
I also have two lingering questions about elliptic curves:
  1. Doesn't a curve technically have an infinite amount of points? When we worked on the elliptic curve Z(E) mod 5 we only came up with 9 points. Are those the only solutions because our solutions must be integers mod 5?
  2. When we are dealing with a finite number of points, such as Z mod 5, how can infinity be a point? It's hard for me to grapple that a curve with a finite number of points and finite values for coefficients can have infinity as a point. Am I missing something or is that just a difficult, abstract concept?
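To answer my own first question, I tried listing the points by brute force in Python. I'm guessing at the curve we used in class (y^2 = x^3 + x + 1 mod 5), but it does give exactly 9 points once infinity is counted:

```python
# Enumerating all points on y^2 = x^3 + x + 1 over the integers mod 5.
# (My guess at the curve from class; the coefficients may have differed.)
a, b, p = 1, 1, 5
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x**3 + a * x + b)) % p == 0]
points.append(None)  # the point at infinity, the group identity
print(len(points))   # 9: only integer pairs mod 5 count as solutions
```

So yes, the finitely many points are exactly the solutions with coordinates in the integers mod 5, plus the one extra symbol for infinity.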

Tuesday, November 30, 2010

16.2, Due on December 1

The first portion of the section where the authors describe how we are going to use elliptic curves to factor large numbers was very interesting. "The situation where gcd(b,n)=1 fails... form the key to using elliptic curves for factorization" (pg. 353). I like that little teaser and it's making me moderately excited to read section 16.3.
I also found the first portion to be a review of what we went over in class on Monday. It's nice to approach the reading having already seen an example in class.
The hardest section for me was the last one about representing plaintext. This confuses me. I thought we were using the elliptic curves to factor n, not to encrypt anything. I also had a question about the example on page 353. Where did the equations for what x3 and y3 are congruent to come from? I see where the values came from but not the equations.

Monday, November 22, 2010

16.1, Due on November 29

I kept waiting for the cryptography application in this section but it never came. Hopefully in the later sections I will see how it relates. I enjoyed reading the historical point on page 349, and it cleared up some confusion about the naming of elliptic curves for me. I also found the very last sentence of the section to be interesting, that infinity is the identity element of the abelian group. I will be very interested in hearing your explanation of that. It reminds me of complex analysis.
The hardest part for me to understand was the addition of elements in the group. I didn't quite follow the group operation or see what the point of it was. I struggled with this section a little bit just because I didn't see the motivation for the math. It seemed like the authors were just babbling on and I wasn't sure why. Do all the points on an 'elliptic' curve form an abelian group with infinity as the identity?

2.12, Due on November 23

I always enjoy reading about the history and application of the different cryptosystems we have learned about. I liked reading about how the British sold Enigma machines to other countries without telling them it had been cracked. I also liked reading about the different aspects of the machine that made the different combinations plentiful. It's interesting to learn about early, physical cryptography machines, especially in light of the Quantum Computing we read about earlier.
One question I have: is this just a substitution cipher? I thought that at first, then it sounded like there was more going on, but then on the last page I read about Rejewski and his colleagues creating a code book for the different combinations, and therefore "the effect of the plugboard was then merely a substitution cipher" (pg. 55). What is going on here that makes this different from a substitution cipher?

Shor Reading and 19.3, Due on November 22

I enjoyed the reading on the web about Shor's Algorithm. It was written in a very lighthearted manner tailored to the layman. His analogies were very good and he brought up some great points. It's interesting to me to think about modular exponentiation and factoring large values of n.
The part of the reading that confused me was the clock example. I understood the 'experiment' but I didn't understand what we can deduce from the results.
The book reading was not very easy to follow. If I were to ask one question about it I would ask where the Fourier Transform comes from. It seems pretty crucial to Shor's Algorithm but I found its explanation to be less than satisfactory. I am also confused about how the graph relates to the algorithm.

Thursday, November 18, 2010

19.1 and 19.2, Due on November 19

One big question I have is regarding the strange physics notation. I have never seen vectors represented that way and I'm a little confused as to what they mean. Another question I have is: What does this have to do with cryptography? I was so confused by the reading that I didn't even see its application to cryptography.
The thing I'm most looking forward to in this unit is the actual experiments in class. I get that we are representing light as a vector but I think it will make more sense in class when we actually do the experiment. I also like the idea of using light as a medium for sending hidden messages, even if I don't understand why or how.

Tuesday, November 16, 2010

14.1 and 14.2, Due on November 17

The first section was the most easily understood by me. I enjoyed reading about the Zero-Knowledge Protocol and the tunnel diagram was very helpful. It's interesting to me how useful different mathematical ideas are in real contexts. The notion of finding square roots as a tool to secure information is really cool to me.
The second section was less easy for me to follow. I didn't catch all the steps, such as what Peggy and Victor need to check. I am wondering if this Feige-Fiat-Shamir identification scheme is meant to be a cryptosystem. I think it is not. I am also curious to know where else I might see Zero-Knowledge Techniques in place. It seems like an interesting idea and a quick Wikipedia search taught me a little more about it. I found it unsurprising that the example on Wikipedia used Victor and Peggy as well; those must be the standard names.

Saturday, November 13, 2010

12.1 and 12.2, Due on November 15

One confusing part for me was in the example at the bottom of pages 299-300. By using the Lagrange interpolating polynomial they came up with some numbers but I don't understand where the numbers came from. I'm not sure why they divided some of the large integers by 5 either. I understood most of the example except for that part. I also didn't understand the Vandermonde Matrix. That probably led to me being confused about the Lagrange interpolating polynomial.
The interesting part of these sections for me was the different approaches to solving the same problem. As a mathematics education major we are always taught to represent things in multiple ways. The book did a nice job of explaining how to break this secret in three different ways. I also liked the first section because it was explained well and not too difficult.

Thursday, November 11, 2010

Exam II Review

I will admit Dr. Jenkins, I am pretty nervous for this second exam. To try and alleviate that stress I'm doing my best to be well prepared and studying early. As I have been studying the topics I feel the least sure about are:
  • The Pollard Rho Factorization Method
  • Diffie-Hellman Key Exchange
  • Encryption with Hash Functions
  • Digital Signatures
Those are the concepts that make me scared. As far as the problems we should be able to do, I have no idea what to expect with regard to:
  • Signing documents using RSA or ElGamal
  • Describing strengths, weaknesses, and attacks for the algorithms we have studied in class
As I have been studying I am consistently pleased with how much more sense the mathematics makes the second time around. There are many ideas that I understand now that I didn't know when I first read and learned about them. That is making me more confident in my ability to do well on the test and I'm hoping that as I keep studying they will continue to make more sense as opposed to me just memorizing what to do.
As far as questions that I expect to see on the exam, I am guessing there will be a problem regarding finding a square root, a problem regarding calculating exponentials modulo n, and calculating Jacobi or Legendre symbols. I bet there will also be a problem where we are asked to describe one of the primality tests or factorization tests, and one where we are given a cryptosystem and asked to describe a weakness or strength of it.

Tuesday, November 9, 2010

8.3 and 9.5, Due on November 10

I did not enjoy reading section 8.3. Hash functions have been a struggle for me from the start and this section only added to my woes. I don't understand the SHA-0 process, why letters such as A and B equal 1010, 1011 respectively, or what the 'W' variables are. My brain just does not think like a computer.
I enjoyed reading section 9.5 much more. You discussed it briefly on Monday and it's always easier to read something when the ideas have already been introduced. I like the idea of having alpha^q = 1 (mod p) instead of using alpha as a primitive root.
I have one small suggestion regarding the homework. It would be nice if there was a key to selected problems posted online. I can usually solve all ten problems but I get the feeling there is often a better way than the method I chose. If it's too much work don't worry about it, and the grader is pretty good about catching and marking individual errors, but if you already have a key for the grader it would help my test preparation to see other ways of solving the problems.

Friday, November 5, 2010

9.1-9.4, Due on November 8

I enjoyed reading about the Birthday Attacks on Signatures. I believe you touched on it in class on Friday because it sounded very familiar. I also liked reading about the ElGamal Signature Scheme because it sounded familiar from what we have already studied.
Now, there are a few things I am still confused about. One big item is hash functions. I know that hash functions do not compose a cryptosystem but they are obviously used in cryptosystems. Once something has been sent through a hash function how do you get the message out again? I am very confused about them and their place in cryptosystems. Another question I have is about digital signatures. Are digital signatures anything special besides a specific message? It seems to me that they are essentially just messages that are unique to a sender. I'm not sure why the entirety of chapter nine is dedicated to digital signatures if they are just certain types of messages.

Thursday, November 4, 2010

8.4-8.5, 8.7, Due on November 5

The section on the Birthday Paradox (and Birthday Attack) was the most interesting to me. I have learned that paradox before, and even put the mathematical probability to the test. I was at a party and counted 22 people so I went around and asked everyone for their birthday. Unfortunately there were no matches, making me look pretty foolish, but I told them if one more person showed up they would have the same birthday as someone already there. However, I realize now even with 23 people the probability is still only slightly better than 50%. Maybe next time I'll try it with 60 people. With the Birthday Attack I am wondering why anyone would use it instead of the Baby Step Giant Step method. The book claims that it doesn't provide a guarantee of a match, and that the BSGS method is generally faster anyway.
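Here's a quick Python check of the probabilities, since I didn't entirely trust my party math:

```python
# Probability that at least two of n people share a birthday
# (365 equally likely birthdays, ignoring leap years).
def shared_birthday(n):
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (365 - i) / 365
    return 1 - p_distinct

print(round(shared_birthday(22), 3))  # about 0.476 -- no guarantee at my party
print(round(shared_birthday(23), 3))  # about 0.507 -- just over half
print(round(shared_birthday(60), 3))  # about 0.994 -- a much safer bet
```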
The last section on using hash functions to encrypt was hard for me. It looked a lot like the example you did on Wednesday but I'm still a little confused. I keep thinking hash functions are going to look like nice algebra functions but it's becoming increasingly more obvious that they are a whole different breed of functions.

Tuesday, November 2, 2010

8.1-8.2, Due on November 3

One connection I can draw from these sections is that the hash functions feel a lot like the AES and DES algorithms. Maybe it's the XORing or the three letter acronyms (MD4, MD5) but something about hash functions reminds me of those earlier computer algorithms. However, I think the difference comes in the fact that the hash functions are one way. I recall finding inverse operations for the different modes of operation in an earlier homework and I don't believe that is possible with hash functions.
While there were many confusing parts for me, the one I am most intrigued and eager to learn more about is the proposition on page 221. I don't see why such a vague supposition can lead to the answer to a discrete logarithm problem, which we just learned is very hard to solve. It seems that if we have a function that is not injective we shouldn't be able to solve a discrete logarithm problem. [I think that's what m not equal to m' but h(m)=h(m') implies, although I'll admit it's written differently than I normally see it and I'm not totally convinced myself that it does imply the function is not injective.]

Friday, October 29, 2010

7.3-7.5, Due on November 1

I am interested in how cryptography answers a variety of problems that go beyond just disguising messages. For example, I was intrigued by section 7.3 in regards to making sure Alice doesn't change her message and Bob doesn't read the message until after an event has occurred. In section 7.4 the authors claim that there is a situation involving elliptic curves where the decision Diffie-Hellman problem is solvable quickly, which I am interested in. I recall getting a teaser in an earlier section about elliptic curves as well.
The hardest part for me was understanding the ElGamal cryptosystem. I understand the big idea because I understand public key cryptography (not completely, but mostly) but I don't see how the public and private keys are derived in regards to the discrete log problem. The RSA was easy because the private key was just two large primes, but I'm not sure what the private keys are in the ElGamal system.
P.S. My high school German teacher was named Mrs. El Gamel and she told me 'El Gamel' was Arabic for 'The Camel.'

Wednesday, October 27, 2010

7.2, Due on October 29

I enjoyed reading about Baby Step, Giant Step (section 7.2.2). Not only did it remind me of Neil Armstrong's famous quote, but it also reminded me of the meet-in-the-middle attacks from the earlier sections. I also liked reading this section in light of the material about factoring. Knowing that the past two weeks have been spent learning how to factor in order to break weak RSA systems helped me understand why we are learning all this information about discrete log problems. I'm guessing that the discrete logarithm problems are going to be used in a public key cryptosystem and all this information we are learning is going to help us crack weak versions of that system.
The two hardest subsections for me were 7.2.1 about the Pohlig-Hellman Algorithm and 7.2.3 about the Index Calculus. The examples were quite long and despite finally understanding the log notation I had difficulty understanding it in light of the examples.
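To make sure I really understood Baby Step, Giant Step, I sketched it in Python on toy numbers of my own (not an example from the book):

```python
# Baby Step, Giant Step for the discrete log alpha^x = beta (mod p), p prime.
from math import isqrt

def bsgs(alpha, beta, p):
    m = isqrt(p - 1) + 1
    baby = {pow(alpha, j, p): j for j in range(m)}  # baby steps alpha^j
    giant = pow(alpha, -m, p)                        # alpha^(-m) mod p
    gamma = beta
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]               # x = i*m + j
        gamma = gamma * giant % p
    return None  # no solution

print(bsgs(2, 9, 11))  # 6, since 2^6 = 64 = 9 (mod 11)
```

Storing the m baby steps is what makes it a time-memory tradeoff, which is why the book keeps mentioning memory.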

Tuesday, October 26, 2010

6.5-6.7 and 7.1, Due on October 27

The most interesting portion of these sections for me was the idea of trapdoors. I liked reading about the variety of trapdoors available and how they differ according to the public key system. I also enjoyed reading about other types of public key cryptosystems besides the RSA. It's easy to avoid thinking outside the box but this section helped me see some other alternatives to the RSA.
One thing I didn't really understand was the discrete logs in section 7.1. I am familiar with the idea of logarithms but I don't understand the x = L_alpha(beta) notation. I also don't really see how the discrete logs apply to anything so far. I imagine their application will become more apparent in the ensuing sections. The other hard section for me was 6.6, An Application to Treaty Verification. I don't quite know what all is going on there.

Sunday, October 24, 2010

6.4.1, Due on October 24

I guess the thing I am most confused about is how we factor the numbers into smaller primes. The example of the basic principle on page 183 starts by saying 9398^2 = 5^5 * 19 (mod 3837523) but I don't really see how they got that or how I would factor an even bigger number. I believe, upon finishing the chapter, that they give the explanation on page 185 but I didn't quite see how it all related. I'm not sure where this equation of in+sqrt(in) + j^2 came from or how it helps us factor n into smaller primes.
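At least the congruence itself is quick to check in Python, even if I still don't see how they found it:

```python
# Checking the book's claim that 9398^2 = 5^5 * 19 (mod 3837523).
n = 3837523
print(pow(9398, 2, n))  # 59375
print(5**5 * 19)        # 59375 as well, so the congruence holds
```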
The most interesting part of this section for me was the idea of using matrices. I enjoy linear algebra and I loved my Matrix Analysis class so I'm pretty eager to see additional applications of matrices. This matrix formed in this section must serve a purpose, although I have not successfully identified it yet. I understand how the matrix is formed, but I don't see where the linear independence comes in, or why we move to mod 2.

Thursday, October 21, 2010

6.4, Part I, Due on October 22

Fermat's Factorization was the most interesting portion of this quick section. It is so simple but very powerful for small numbers and it still took a genius to come up with it. I like it because I can understand it and it uses the idea of working backwards to factor (relatively) large numbers.
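Here's my attempt at Fermat's Factorization in Python, on a small number of my own choosing (not the book's example):

```python
# Fermat's factorization: write odd n as a^2 - b^2 = (a-b)(a+b) by trying
# a = ceil(sqrt(n)), a+1, ... until a^2 - n is a perfect square.
from math import isqrt

def fermat_factor(n):
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(fermat_factor(8051))  # (83, 97), since 8051 = 90^2 - 7^2
```

It works quickly exactly when the two factors are close to sqrt(n), which is the "working backwards" idea.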
The p-1 Factoring Algorithm was the only other topic discussed, and it wasn't crystal clear to me. One question I have is how B is chosen. Its only description is that it is a bound but I'm not sure what it is bounding. This method looks a lot like the Miller-Rabin Primality Test, except it actually gives us factors instead of just determining if a number is prime or composite.
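Here's a sketch of the p-1 method as I understand it, with a toy n I built myself. If I've got it right, B bounds the prime factors of p-1 that the method can exploit: when p-1 divides B!, Fermat's Little Theorem gives a^(B!) = 1 (mod p), so gcd(a^(B!) - 1, n) picks up p.

```python
# Pollard's p-1 method (my own sketch, not the book's exact presentation).
from math import gcd

def pollard_p_minus_1(n, B, a=2):
    x = a
    for k in range(2, B + 1):
        x = pow(x, k, n)   # after the loop, x = a^(B!) mod n
    d = gcd(x - 1, n)
    return d if 1 < d < n else None

# n = 101 * 103: here p = 101 has p-1 = 100 = 2^2 * 5^2, which divides 10!,
# while 103 - 1 = 102 = 2 * 3 * 17 does not (17 exceeds the bound B = 10).
print(pollard_p_minus_1(101 * 103, 10))  # 101
```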

Monday, October 18, 2010

6.3, Due on October 20

This whole section was interesting to me and I enjoyed learning about ways to get around factoring numbers. I found the methods to be very intelligent solutions to a very difficult problem. It's wise of mathematicians to realize that it's a lot easier to see if a number is composite than to see if it's prime, and since a number can't be both you only need to check for one or the other condition. One thing I like about mathematics is that it uses efficient methods to solve difficult problems and this is definitely one such case. I also find it interesting that as important as primes are we still have no way to determine if a very large number is prime. Mathematics is definitely a living field with intriguing problems still existing today.
The actual primality tests were difficult to follow. Fermat's test was easier to understand (at least the suppositions) but the latter two tests were much more difficult to read and follow. Their preceding arguments were bolstered by examples but I do not understand what exactly is going on with these tests.

Friday, October 15, 2010

3.10, Due on October 18

I enjoy this idea of square roots in modular arithmetic. Sometimes when I get a little confused in modular arithmetic I just remind myself what modular arithmetic actually means, and when I put the equation into 'regular' arithmetic it makes more sense. One question I have about the Legendre and Jacobi symbols is regarding their relationship to fractions. Is the fraction notation used just as an arbitrary symbol or is there a connection between the modular arithmetic and fractions? I'm guessing there is a connection but I haven't made it yet.
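One thing that helped me: the Legendre symbol can be computed with Euler's criterion, which makes it clear (to me at least) that the 'fraction' is just notation, not division:

```python
# Euler's criterion: (a/p) = a^((p-1)/2) mod p for an odd prime p,
# which comes out to 1 (a is a square mod p), p-1 (i.e. -1), or 0 (p divides a).
def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

print(legendre(2, 7))  # 1: 2 = 3^2 = 9 is a square mod 7
print(legendre(3, 7))  # -1: 3 is not a square mod 7
```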
I became a little confused with the examples. One difficulty for me was that they kept referring to the steps of the Theorem on page 91 in the examples, and I kept having to flip between pages to understand the reasoning. While it may sound trivial, having to flip the pages over and over again made the example much harder to follow. In class we have the advantage of writing the theorem down on one board and doing an example on the other so that they may be seen simultaneously.

Thursday, October 14, 2010

3.9, Due on October 15

I like algebra and I like modular arithmetic so I enjoyed this section. But, I didn't exactly follow all that transpired and I have a lot of questions. For example, the Proposition on page 86 requires p to be congruent to 3 mod 4, but I am wondering how often this is going to happen. Is this a frequent characteristic of primes? I'm also wondering how they got that (p+1)/4 = 3 in the example on page 87. I see that p=11 but I'm not sure why they broke it up in that way.
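I tried the exponent trick in Python to see why the 3 shows up when p = 11 (the 5 below is my own pick of a quadratic residue mod 11, not necessarily the book's):

```python
# When p = 3 (mod 4), a square root of a quadratic residue a mod p is
# a^((p+1)/4) mod p. With p = 11, (p+1)/4 = 3, hence the 3 in the example.
def sqrt_mod(a, p):          # assumes p = 3 (mod 4) and a is a square mod p
    return pow(a, (p + 1) // 4, p)

r = sqrt_mod(5, 11)
print(r, (r * r) % 11)       # 4 and 5: indeed 4^2 = 16 = 5 (mod 11)
```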
I enjoyed the last half of page 87 where the authors talk about breaking up the modulus to get two different congruences, but I don't see it as too helpful since most of our mods in this section seem to be prime numbers. What is the connection there? I can see where this is going- factoring n- and I like it. I'm guessing that in certain instances of the RSA n is going to be factorable.

John Friedlander Colloquium

I found John Friedlander's talk on prime numbers to be very interesting and directly in line with what we have been covering in Cryptography. I was especially intrigued by the twin prime pairs and the question of whether there are infinitely many of them. Like most people I like problems whose statements are easy to understand but less simple to prove. Knowing the different names or classifications of the primes also reinforced to me their importance, such as the Mersenne and Fermat Primes. One connection Mr. Friedlander made that I had not noticed previously was the one between the different steps of the Euclidean Algorithm and the gcd in each step. When he wrote (91,49)=(49,42)=(42,7)=7 the link between the Euclidean Algorithm and the gcds made much more sense. The most confusing part for me was the Sieve of Eratosthenes. In the limited time spent on it I didn't catch exactly how it worked. I did like the Legendre Formula though, even if I didn't fully understand it.

Tuesday, October 12, 2010

6.2, Due on October 13

One interesting thing for me in this section was just how smart some people are. I mean really, the attacks they have on this system are mostly Algebra based and yet they are so complex and specific that it is mind boggling how they came up with them. It was also interesting to see how the continued fractions played a part in these attacks. I thought that section was pretty random but now I see how it fits in.
What does "In time polynomial in log n" mean (page 170)? I've seen similar wording a few times in the book. Except for the low exponent attack which made some sense, I didn't understand the other attacks. I did, however, understand the conditions for such attacks which I feel is important. The short plaintext attack was reminiscent of the DES system with its XORing, but I didn't really follow how it went.

Friday, October 8, 2010

3.12, Due on October 11

One portion of this section that was unclear to me was the end of the Theorem on page 103. It states that r/s = p_i/q_i. I thought maybe the p and q were distinct primes but then in some of the examples p and q were not prime (such as 22/7 as an approximation to pi). I also don't see why they need a subscript i attached to them. How are these two numbers related? I also didn't follow how they worked backward on the second to last problem on page 103. The authors started with 12345/11111 and then somehow pulled all these fractions out. I don't know how they did that.
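I think the fractions get pulled out by running the Euclidean Algorithm and keeping the quotients at each step; at least this Python sketch reproduces a continued fraction for 12345/11111:

```python
# Continued fraction expansion via the Euclidean Algorithm: the quotient
# at each division step becomes the next partial quotient.
def continued_fraction(num, den):
    quotients = []
    while den:
        q, r = divmod(num, den)
        quotients.append(q)
        num, den = den, r
    return quotients

print(continued_fraction(12345, 11111))  # [1, 9, 246, 1, 4]
```

Truncating the list early gives the convergents, which I believe are the p_i/q_i approximations the theorem is talking about.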
This idea of approximating without a calculator is very intriguing to me. I feel like I am learning all sorts of neat math tricks in this class that, if nothing else, I can use to impress my friends with my non-calculator math skills. I like these continued fractions although I will admit I find them to be a little detached from the rest of the unit. I'll be eager to see how they fit in with the RSA.

Thursday, October 7, 2010

6.1, Due on October 8

The idea of the RSA was interesting to me. The idea that a key can be made public and still a message made secure is, in my mind, revolutionary. I really like the idea of the RSA even if I don't understand everything about it. On the one hand, it seems to me this idea should have been discovered before the 1970s. But then on the other hand it seems that computers weren't capable of this type of mathematics until the 1980s so the method seems ahead of its time.
I was confused about how the phi function fits into this algorithm. I think I understand what's going on with the two distinct primes but I got confused with the phi. I also got a little lost in the example, but that will probably be cleared up in class this afternoon.
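To see where phi fits in, I worked a toy RSA example in Python (tiny primes of my own choosing, obviously nothing like a secure key):

```python
# A toy RSA example showing where phi(n) fits: e must be invertible mod
# phi(n) = (p-1)(q-1), and the private exponent d is that inverse.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120, kept secret along with p and q
e = 17                    # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)       # d*e = 1 (mod phi), computable only if you know phi

m = 65                    # a message, as a number less than n
c = pow(m, e, n)          # encrypt with the public key (n, e)
print(pow(c, d, n))       # 65 -- decrypting with d recovers m
```

Euler's Theorem is what makes m^(e*d) = m (mod n), so knowing phi(n) (equivalently, knowing the factorization of n) is the whole secret.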

Tuesday, October 5, 2010

3.6, 3.7, Due on October 6

I like the idea of simplifying the exponents to get an easier number to work with. It also never occurred to me that exponents need to be 'modded' as well as their bases when working in modular arithmetic. This seems like a simple idea I should have picked up on sooner, but I never did until this section. I also enjoyed the part about primitive roots. They remind me of generators in finite groups; essentially, the two seem to be the same idea. As much as I liked them, however, I did get confused with the proposition immediately following their introduction.
One thing I'm confused about is the simplicity of calculating Euler's phi function. It seems easy for small, two digit numbers but much harder for larger numbers. I think this answer is found on page 81 but I couldn't quite interpret the notation to come into line with my question.
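As far as I can tell, computing phi is easy once you have the prime factorization; the hard part for large numbers is the factoring itself. A sketch of my understanding:

```python
# Euler's phi from the prime factorization: phi(n) = n * prod(1 - 1/p).
# The trial division below is fine for small n; for huge n the factoring
# step is exactly the hard part.
def phi(n):
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p  # multiply result by (1 - 1/p)
        p += 1
    if n > 1:                      # leftover prime factor
        result -= result // n
    return result

print(phi(10), phi(81), phi(97))  # 4 54 96
```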
I think there will be great consequences out of Fermat's Little Theorem and Euler's Theorem, but I haven't quite grasped them yet. Their proofs and applications were hard for me to follow but going over it again in class will be very valuable.

Sunday, October 3, 2010

3.4-3.5, Due on October 4

I enjoyed the opening description of the Chinese Remainder Theorem. I found it clever of the authors to work backward in explaining the Theorem, and it certainly helped me understand better what the main idea of the theorem is. I was also impressed with the efficiency of the modular exponentiation in section 3.5. I love being efficient and this struck me as a very creative way to simplify exponential, modular arithmetic. When people say that we don't need to learn math because computers can just do it I would cite this modular, exponential arithmetic to them, where the computer can't do the math with its limited memory but a human brain with its ingenuity can.
The hardest part for me to follow was the general form of the Chinese Remainder Theorem. I understand that a solution is guaranteed to exist but I didn't feel that they gave a very good explanation of how to find the solution when there are more than two moduli.
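After some thought, here is one way I believe the solution can be built up one congruence at a time (my own sketch, not necessarily the book's construction):

```python
# Chinese Remainder Theorem for pairwise coprime moduli: maintain a partial
# solution x mod m and extend it to satisfy each new congruence in turn.
def crt(residues, moduli):
    x, m = 0, 1
    for r, n in zip(residues, moduli):
        t = (r - x) * pow(m, -1, n) % n  # adjust so x + m*t = r (mod n)
        x += m * t
        m *= n
    return x % m

print(crt([2, 3, 2], [3, 5, 7]))  # 23: x = 2 mod 3, 3 mod 5, 2 mod 7
```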

Thursday, September 30, 2010

Test Preparation, Due on October 1

  • Out of the topics we have studied thus far I think the different cryptosystems are most important. We talked a lot about them and spent considerable time going over how they work and how they are decrypted. I think they are important to know and will offer a nice background for the rest of the course.
  • Some questions I expect to see on the exam are short descriptions of concepts. Many of the cryptosystems we covered are hard to test on paper with a simple calculator, but they can be described through words. I am expecting some short answer questions about the DES, AES, Viginere Cipher, Hill Cipher, and the like. I also suspect there to be some questions regarding modular arithmetic and finding inverses.
  • One thing I need to work on to better be prepared for the exam is exactly what I wrote about above, understanding the big and little ideas involved with the different cryptosystems. I need to review the modes of operation as well. I feel fairly confident with the math, but my overall knowledge of the systems is lacking.

Tuesday, September 28, 2010

5.1-5.4, Due on September 29

One aspect that I am curious about is the phrase "highly nonlinear." The S-boxes of the Rijndael System are repeatedly described this way and I've heard that phrase used in class as well. Does this phrase have a unique meaning or does it just mean what it sounds like? And why do they throw the "highly" in there? It seems that linearity is a black and white property, without any variation or degrees. I find the idea of multiple rounds to complicate a cryptosystem to be a very good idea. It complicates the message greatly while only complicating the algorithm slightly.
Section 5.2.5 on The Key Schedule was the hardest portion of the reading for me to follow. I can catch the bigger picture about what is happening with the different layers but the key schedule was very confusing. I understand that W(i) is a column vector but I don't know what happens when it is transformed with T(W(i-1)) and I don't understand what the r(i) represents. What exactly is the round key?

Saturday, September 25, 2010

Due on September 27

  1. The homework assignments have been taking me about 6 hours to do. I generally work with Kadi and Sharon for a few hours on Monday afternoon, and then I work on the remainder of the problems on Tuesday. Any that are still giving me trouble I ask you about during office hours on Wednesday. I have felt that the lecture and reading assignments have helped prepare me for them.
  2. The homework has contributed the most to my learning in this class so far. It's hard to pick one thing, though, because they all work together for my learning. I get the least out of the reading, but perhaps that's because I do it first so the concepts are new. Lecture is the second most helpful, and trying out the concepts in the homework is the most helpful.
  3. I suppose if there was one thing I would suggest to make this class better it would be more descriptive notes on the board during lecture. I am often a few steps behind you when I'm copying down the notes and when I catch up I can't remember what you said and the notes do not give me any clues either. I just copy it down and worry that I will ask a question you just barely answered while I was copying down the previous section. It's not a big deal at all, but one way that would improve my learning.

Wednesday, September 22, 2010

3.11, Due on September 24

I really like Abstract Algebra (that's easy to say now) so naturally I enjoyed this section. I enjoy doing multiplication in modular arithmetic because I think it simplifies the numbers greatly. At first I was wondering how the modular arithmetic related to the cryptosystems but then it all made sense when I saw the correlation with the key bytes. This was a good review of abstract algebra but I think it will ultimately make more sense when I put it to practice.
The hardest part of this section was the last part about the LFSR sequences. My linear algebra was sharp at one time but now it is in need of a review. I also found that last part to be somewhat detached from the rest of the chapter; I didn't see how it related. I also didn't follow all of the example. I am curious to know what the connection between 2 to the 8th and the 8-bit bytes is. I know they are related but I'm not sure how.
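
On the 2^8 question, I think I see part of it now: a byte is 8 bits, so there are 2^8 = 256 possible bytes, exactly the elements of GF(2^8). Bytes multiply as polynomials over GF(2), reduced mod x^8 + x^4 + x^3 + x + 1 (the polynomial AES uses). A small sketch, with a commonly cited example product:

```python
# Multiply two bytes as elements of GF(2^8), reducing modulo the
# AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).

def gf_mul(a, b):
    result = 0
    for _ in range(8):
        if b & 1:              # this bit of b contributes a copy of a
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:          # degree reached 8: reduce
            a ^= 0x11B
    return result & 0xFF

assert 2**8 == 256             # one element of GF(2^8) per byte
print(hex(gf_mul(0x57, 0x83)))  # -> 0xc1
```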

Tuesday, September 21, 2010

4.5-4.8, Due on September 22

There were interesting and confusing parts of these sections. The first thing that struck me was that much of the DES breaking (and code breaking in general) is dependent upon memory. "This is possible with a lot of memory" seems to be a common phrase in this book. In this decade we have nearly limitless amounts of memory available (that's my own thought; I could be wrong), which will require the algorithms behind cryptosystems to be much more difficult. I was also interested in the password security section. I assumed passwords were more effective if they were >8 characters long and included symbols because they were harder to guess. Section 4.8 made me think that those types of passwords are not only harder to guess, but also easy to encrypt and harder for Eve to decrypt.
The modes of operation were very difficult to follow. I understand that they are variations of the DES, and Monday's lecture helped me understand the big picture of what was going on but I definitely got lost in the specifics of the different modes. I don't know why they are different or what it would look like to execute them all. I think the keys are still just strings of 1s and 0s.
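
To pin down at least one difference between the modes, I tried a toy sketch. E here is a made-up 4-bit cipher, not DES; only the contrast between the ECB and CBC chaining patterns is the real point:

```python
# Toy illustration of two modes of operation.  E is a hypothetical
# stand-in block cipher on 4-bit blocks, NOT DES.  Blocks and keys
# really are just strings of bits, written here as binary integers.

def E(block, key):
    return (3 * block + key) % 16        # toy cipher, invertible

def ecb(blocks, key):
    """ECB mode: every block is encrypted independently."""
    return [E(b, key) for b in blocks]

def cbc(blocks, key, iv):
    """CBC mode: each block is XORed with the previous ciphertext first."""
    out, prev = [], iv
    for b in blocks:
        prev = E(b ^ prev, key)
        out.append(prev)
    return out

msg = [0b1010, 0b1010, 0b1010]           # the same plaintext block 3 times
print(ecb(msg, 0b0101))                  # -> [3, 3, 3]: repetition leaks
print(cbc(msg, 0b0101, iv=0b0001))       # -> [6, 9, 14]: chaining hides it
```

Seeing that ECB leaks repeated blocks while CBC scrambles them at least tells me why the different modes exist.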

Friday, September 17, 2010

4.1-4.2 and 4.4, Due on September 20

I enjoyed reading about the competition in the 1970s that eventually resulted in the design of the DES system. It was fun to read about whether IBM put a trapdoor into the system, or if the NSA put their own trapdoor into the system. Who can you trust when you have someone else come up with a crypto system for you?
Speaking of the DES, if I were to summarize all of my confusion into one word, it would be: Diagrams. I did not follow what was going on in the diagrams or the functions of the DES. I think I could follow most of the particulars if I knew what the big ideas were. I understand that some of the plaintext letters go to more than one ciphertext letter (although not all of them do), but I didn't get the big idea. One of my stumbling blocks is the concept of a 'bit.' Bit seems to have a very loose definition. The book says that a bit is a 1 or a 0. When we talk about a 64-bit key, do we mean the key is a 64-digit number composed entirely of 0s and 1s? It seems that sometimes bits refer to letters, but perhaps what they mean is that a letter is transformed into a number, and that number has only 0s and 1s.
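
On the bit question, I think my guess at the end is right: a 64-bit key is a string of 64 0s and 1s, and letters become bits through a character code. A quick check (the key here is obviously a made-up example, not a good one):

```python
# A bit is literally a 0 or 1, and a 64-bit key is a string of 64 of
# them.  Letters enter the picture through a character code: each
# letter becomes a number, and that number is written in binary.

letter = 'A'
number = ord(letter)             # 'A' -> 65 in ASCII
bits = format(number, '08b')     # 65 -> '01000001' (8 bits = 1 byte)
print(letter, number, bits)

key = '0110' * 16                # a (very non-random) 64-bit key
print(len(key))                  # -> 64
```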

Wednesday, September 15, 2010

2.9-2.11, Due on September 17

One fascinating thing about this section for me was the idea that true randomness essentially only exists in nature. My father works for an electronics company that makes semiconductors so my curiosity was especially piqued when I read that the 'thermal noise from a semiconductor resistor is known to be a good source of randomness' (pg. 41). Computers help with more aspects of cryptography than I ever thought, but they haven't been able to generate true randomness yet.
The hardest part of these sections was the Linear Feedback Shift Register Sequences. I think that the idea is to create a ciphertext out of a one-way function in mod 2, but actually coming up with such a sequence and then decrypting it went over my head. I think that all the x variables are known pieces of the binary ciphertext, but I'm not sure. The matrix in the proposition on page 48 also confused me. This cipher reminds me of the Hill cipher because of the usage of linear algebra, although it appears to be much more complex.
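
To convince myself that the x's really are keystream bits, I tried a small sketch of the recurrence. The coefficients and starting bits below are my own made-up example, not one from the book:

```python
# Sketch of an LFSR keystream.  Each new bit is a mod-2 linear
# combination of the previous m bits:
#   x_{n+m} = c_0*x_n + c_1*x_{n+1} + ... + c_{m-1}*x_{n+m-1}  (mod 2)
# The x's are the keystream bits; the c's are fixed coefficients.

def lfsr(initial_bits, coeffs, length):
    bits = list(initial_bits)
    m = len(coeffs)
    while len(bits) < length:
        new = sum(c * x for c, x in zip(coeffs, bits[-m:])) % 2
        bits.append(new)
    return bits

# Made-up example: x_{n+3} = x_n + x_{n+1} (mod 2), starting 0, 1, 1
stream = lfsr([0, 1, 1], [1, 1, 0], 20)
print(stream)
# Encryption would then be ciphertext = plaintext XOR keystream, bitwise.
```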

Monday, September 13, 2010

2.5-2.8 and 3.8, Due on September 15

The most interesting thing for me in these sections was learning about the variety of crypto-systems. I was very interested to learn about the varieties of systems that have been invented, and then cracked. It seems that just as much ingenuity, creativity, and intelligence is required to make up a code system as is required to crack one. And I don't just mean cracking it once; I mean figuring out a systematic method for always breaking an encrypted message given the encryption method. When I first read about Kerckhoffs's principle it seemed fairly insignificant to me, but now it is increasingly making more sense.
One thing that confused me about the inverse matrices was how the fractions related to their inverses. For example, the example in the book finds that 5 is the inverse of -2 mod 11. Then they make the conclusion that we can replace -1/2 by 5 in mod 11. How did they jump from -2 to -1/2? Another thing that was difficult for me was the ADFGX cipher. I didn't really understand how to formulate the matrices.
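
About the jump from -2 to -1/2: I think the point is that as fractions -1/2 = 1/(-2), and "dividing" mod 11 just means multiplying by an inverse. Checking in Python (three-argument pow with exponent -1 needs Python 3.8+):

```python
# Why -1/2 can be replaced by 5 mod 11: dividing by 2 means
# multiplying by the inverse of 2, and 2*6 = 12 = 1 (mod 11), so
# "1/2" is 6 and "-1/2" is -6 = 5 (mod 11).  Since -1/2 = 1/(-2)
# as fractions, 5 is also the inverse of -2.

inv2 = pow(2, -1, 11)        # modular inverse of 2
print(inv2)                  # -> 6
print((-1 * inv2) % 11)      # -1/2 mod 11 -> 5
print(pow(-2, -1, 11))       # inverse of -2 mod 11 -> 5
```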

Friday, September 10, 2010

2.3, Due on September 13

This method, the Vigenère Cipher, was fascinating. It's such a great variation of the substitution cipher that is easy to understand but very difficult to crack. I think it's interesting that two plaintext letters can map to the same ciphertext letter but still there is a unique encryption. Although I didn't understand it entirely, I also found the connection to vectors to be intriguing. One reason this class was attractive to me was that it appeared to be applied mathematics, and applied abstract algebra in particular. The Vigenère Cipher is evidence of that application indeed.
The hardest part was forming and multiplying the probability vectors. I don't really understand what the letter frequency numbers mean. For example, the chart on page 17 says "a = .082" but I have no idea what the .082 represents. I know it's higher than b = .015, so 'a' appears more frequently than 'b', but until I get some context for the numbers the vectors won't make much sense. (Upon writing that I am thinking that if I added up all the decimals they would sum to 1, meaning that there is an 8.2% chance that any given letter is an 'a'.) The other related hard part for me was figuring out why to multiply A (sub j) and A (sub i). Just in general the probability vectors were the hardest part for me.
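
Following up on my guess about the decimals: using the standard English frequency table (which I believe matches the book's chart), the entries do sum to about 1, and the dot products A_0 . A_i peak at the unshifted position, which seems to be what makes the key-length attack work:

```python
# Standard English letter frequencies for a..z (probabilities).
freq = [.082, .015, .028, .043, .127, .022, .020, .061, .070, .002,
        .008, .040, .024, .067, .075, .019, .001, .060, .063, .091,
        .028, .010, .023, .001, .020, .001]

def dot_shift(i):
    """Dot product of the frequency vector with itself shifted by i."""
    return sum(freq[j] * freq[(j + i) % 26] for j in range(26))

print(round(sum(freq), 2))       # -> 1.0: the entries are probabilities
print(round(dot_shift(0), 3))    # -> 0.066: the unshifted peak
print(dot_shift(3) < dot_shift(0))  # shifted products are smaller
```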

Wednesday, September 8, 2010

Guest Lecturer, Due on September 10

The most interesting thing for me to learn about at the guest lecture was the extent that cryptography was used in the Church. I had no idea the Church leaders used it so much, although it makes sense that they would have a lot of information that they didn't want to be compromised. I was also intrigued by the pseudonyms used in the Doctrine and Covenants. In Alma 37:23 we read "I will prepare unto my servant Gazelem, a stone..." I had wondered what that meant and now it makes sense. (The guest lecturer mentioned in her lecture that Gazelem was a code name for Joseph Smith.) I was also intrigued by the Deseret language. I had not heard of that before.
The lecture wasn't too difficult for me to follow (it wasn't very mathematically dense), although it did leave me with some things to ponder about. I had never thought of using code words to minimize telegram expenses by minimizing words, and I never knew Lewis and Clark used cryptography to send messages back to D.C. Cryptography has been around for a long time and despite being somewhat primitive it has been very effective for a wide variety of organizations. Its uses definitely go beyond military purposes, which I did not think about before this lecture.

Friday, September 3, 2010

3.2-3.3, Due on September 3

  1. The hardest part of this section for me was the very first part about the extended Euclidean Algorithm. I remember doing it for smaller numbers with a few 'backward' steps, but I did not follow the longer examples. I was confused by the book's notation about what x_0 and y_0 were. The examples today were helpful and I think I can do it with the second method proposed by the book on the bottom of page 69. Another hard part was the situation when x is raised to a higher power than 1 (pg. 75). I can solve those problems with guessing, but I don't think that's the best approach.
  2. The most interesting part of the reading for me was the addition and multiplication tables in modular arithmetic. I am a highly visual learner and I love tables that compile so much information into a readable format. I like noticing patterns in the tables and using them to better understand exactly what is going on. In Combinatorics I used tables often to remember recursive relationships. 
Sorry this entry was late; I misread the reading assignments. I will not resubmit the assignment due on September 10, but if you want to read it again it's still up on my blog. Thanks.
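
Coming back to the extended Euclidean Algorithm: this forward-running sketch (my own, though I think it matches the spirit of the book's second method) finally made it click for me, since it avoids the backward steps entirely:

```python
# Extended Euclidean Algorithm, run forward: returns (g, x, y) with
# g = gcd(a, b) and a*x + b*y = g, updating the coefficients at each
# division step instead of back-substituting at the end.

def ext_gcd(a, b):
    x0, y0, x1, y1 = 1, 0, 0, 1     # coefficients for a and b
    while b != 0:
        q = a // b
        a, b = b, a - q * b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

g, x, y = ext_gcd(240, 17)
print(g, x, y)                      # -> 1 -8 113
assert 240 * x + 17 * y == g        # 240*(-8) + 17*113 = 1
```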

Thursday, September 2, 2010

2.1-2.2 and 2.4, Due on September 10

The most difficult part of this reading for me was understanding the affine functions. I think my knowledge of multiplicative inverses in modular arithmetic is a little rusty and I couldn't recall why 9 is the multiplicative inverse of 3 in mod 26. I understood most of the other aspects of affine functions but that one little part about inverses and the need for gcd(alpha, 26) = 1 eluded me.
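
After writing that I worked it out: 3 * 9 = 27 = 26 + 1, so 3 * 9 is congruent to 1 mod 26 and 9 plays the role of "1/3". A quick check (three-argument pow with exponent -1 needs Python 3.8+):

```python
# Why 9 is the inverse of 3 mod 26, and why gcd(alpha, 26) = 1
# matters: alpha has an inverse mod 26 exactly when it shares no
# factor with 26.

import math

print(3 * 9 % 26)              # -> 1, so 9 acts like "1/3" mod 26
print(pow(3, -1, 26))          # -> 9, the modular inverse directly
print(math.gcd(13, 26))        # -> 13, so alpha = 13 has no inverse
```
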
The frequency analysis and its accompanying parts were the most interesting. Looking at the frequency table on page 25 showed me how close some letter frequencies are (for example, h was .061 and r was .060). That type of analysis seems to be only relevant for finding e, the dominant letter, and maybe t and a. I enjoyed reading about the two-letter frequencies and the patterns in the English language that help with the analysis. It made me realize how difficult this could be if a message was intercepted and the language was unknown. The substitution cipher seems much easier to crack if the language is known.

Tuesday, August 31, 2010

1.1-1.2 and 3.1, Due on September 1

  1. The most difficult part of the material for me was the Theorem on page 68. The theorem makes sense to me but I got lost in the proof. I understand how it related to the Euclidean Algorithm but the numerous variables and their representations confused me. I think seeing it a second time in class will be helpful.
  2. One thing that I thought of when I read the first two sections was in regards to online security. In the last two years or so I have often been asked to identify or decipher a message while typing in my password on certain web pages. When I check my Gmail I am asked to type in a 'word verification' before I can sign in. These words are often nonsense and upon reading this section I am wondering if these 'word verifications' relate to password security. Someone told me that they are meant to make sure a human is at the computer and not some computer trying to hack into my email. I don't know what the answer is but I do believe it has to do with securing private information online.

Introduction, Due on September 1

  1. I am a senior this year, double majoring in mathematics and mathematics education. This is my last semester in Provo. In January I am moving to Washington D.C. to do my student teaching before graduating in April 2011.
  2. I have taken the following post calculus math classes: Math 290, Math 313, Math 334, Math 341, Math 342, Math 352, Math 371, Math 300, Math 362, Math 570, and Math 450
  3. I am taking this class because I have taken a class from you before and I think you are a great teacher. I need one more upper division class to graduate and when I looked at the options this semester and saw you were teaching this I wanted to take it. (I often choose my classes by teacher, not content. Last semester I didn't even know what Combinatorics was but when I found out Dr. Barrett was teaching it I signed up.) Dani Mendell took this class last year and recommended it to me as well. I am familiar with your teaching style, testing style and homework load, and I have found them to be very fair and conducive to learning.
  4. I do not have much experience with Maple or other programs. I took a Math Ed class called Teaching with Technology last year where we spent about six class periods on Maple, but that's the extent of my knowledge. I took Computer Science 142 (Introduction to Java) three years ago but I really struggled in that class and I don't feel very proficient in Java or computer programming. I don't feel too comfortable using computer programs for the homework (yet); however, I understand computers are a big part of modern mathematics and I am very willing to learn to program or use Maple. I think it will be a valuable skill in the future and college is probably a good time to learn it.
  5. My favorite math professor is Dr. Barrett. Once I discovered him I took everything he taught because I enjoyed his classes so much. What I enjoyed about him was his extreme organization and willingness to help. With some teachers I feel like when I visit them in their offices they are annoyed and make me feel less intelligent for not knowing what (apparently) I should already know. I never felt that with Dr. Barrett; he always put his students first. He also had a well-organized homework and exam schedule so that we knew what the homework was going to be a month in advance.
  6. Something interesting about myself is that I love to travel and study abroad, and I believe in getting a very broad education (not just math) during my undergraduate years. Last summer I studied Shakespeare and Theatre in London, and this year I did an international development internship in Thailand for three months. I lived with a native, Buddhist family and got to travel all over Thailand, Laos, and Cambodia. I taught English at a local Thai school in my village. I am trying to add variety to my math education and so far it's been providing me with a nice balance.
  7. Your office hours work well for me. Thanks Dr. Jenkins. I look forward to this semester.