Abusing Science

Number 67 of a series

I receive emails. Here is another from Perry Marshall, author of Evolution 2.0.

But it’s not just cancer research. My Evolution 2.0 journey has made it abundantly clear to me that the system underlying virtually all science research is tragically flawed.

Now, if you’re thinking, “Why do I care about science being sold out to the highest bidder?” You can just look around you right now.

It’s an invitation to his podcast:

Science for Sale
Ken McCarthy & Perry Marshall
Wednesday April 22, 2:00 PM Eastern

Besides his Wikipedia entry, what I know of Perry Marshall is from his book. An excerpt provides some insight.

This is a science book, provoked by my burning question: If blind evolutionary forces can produce eyes and hands and ears and millions of species, then why don’t engineers use Darwinian evolution to design cars or write software? Why don’t they teach Darwinism in engineering school? Evolution and natural selection, after all, were heralded as all-powerful, to the point of having godlike qualities. If nature needs no engineers, a little evolution knowledge would surely be useful to us engineers who are stuck in cubicles designing cell phones.

Marshall, Perry. Evolution 2.0: Breaking the Deadlock Between Darwin and Design. BenBella Books, Inc. Kindle Edition.

This is from a section with the title “What You Can Expect from This Book,” so it’s not part of the meat. It does give a clue to the intent.

What Marshall wants you to understand is that “blind evolutionary forces” are not sufficient. Else, engineers would use them to produce novel designs. I get from this that he invokes purpose in seeking to make his case, which is what he intends to prove. Reading selections from the book will reveal Marshall is seeking to sell God. He wants to demonstrate a world created by God, as only a being of some sort can provide purpose.

Engineers developing a novel design do so with a purpose in mind. They want something that flies, so they are sure not to make it too heavy. Engineers would never invoke random choices to create a new and improved design.

Except sometimes they do.

Hybrid Genetic Algorithm and Linear Programming for Bulldozer Emissions and Fuel-Consumption Management Using Continuously Variable Transmission

This paper develops a hybrid optimization approach combining genetic algorithm (GA) and integer linear programming (ILP) to solve the nonlinear optimization problem of managing the fuel consumption and emissions of a tracked bulldozer. Furthermore, the authors propose that a continuously variable transmission (CVT) can better exploit the efficient zones of the engine maps. The original transmission system of the Caterpillar D6T bulldozer consists of a five-gear transmission, whereas the gear ratios of the proposed CVT are continuous and can be assigned according to transmission design. The fuel consumption and three emission items of the engine, unburned hydrocarbons (HC), carbon monoxide (CO), and nitrogen oxides (NOx), are studied. Vehicle-terrain interactions are formulated and the excavation program is characterized by excavation depth and speed. The target of the multiobjective optimization problem is a combination of fuel rate and three emission items. Results show that, for digging depths less than the bulldozer blade maximum digging depth, the target can be improved by more than 31% using CVT incorporated with GA compared to the conventional transmission, obtained by shifting engine operating points from low efficiency zones to optimum points. Finally, integer linear programming is used in a hybrid manner with GA to solve for the optimum combination of excavation steps in tasks of specified digging depths more than the maximum digging depth of the bulldozer blade. Results show that the proposed method can improve the target value up to 18% with the same digging time, and can improve the target value up to 32% using the hybrid optimization approach without time constraint.

The paper describes the development of an improved transmission design. The development employed genetic algorithms toward achieving an optimum design.

Genetic algorithms apply stochastic variation to workable designs to generate new designs, then select those that perform better. To be sure, purpose is invoked here; nature makes no use of purpose. What engineers achieve in short order using significant computer power, nature requires centuries and longer to instill as “improvement” in a natural organism. And remember, what nature ends up with may not be what we would prefer. The wild horses people domesticated thousands of years ago are the ones produced by nature. What people want, and what they have now, are horses artificially bred for our purposes.
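
To make the idea concrete, here is a toy genetic algorithm, a minimal sketch with made-up numbers (the target, rates, and population size are illustrative, not taken from the bulldozer paper). It evolves a list of eight numbers toward a target “design” purely by random mutation and selection of the fittest:

```python
import random

# Toy genetic algorithm (illustrative only: the target, rates, and
# population size are made up, not taken from the bulldozer paper).
TARGET = [3.1, 1.4, 1.5, 9.2, 6.5, 3.5, 8.9, 7.0]

def fitness(design):
    # Higher is better: negative squared error against the target.
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def mutate(design, rate=0.3, scale=0.5):
    # Stochastic variation: randomly perturb some components.
    return [d + random.gauss(0, scale) if random.random() < rate else d
            for d in design]

random.seed(1)
population = [[random.uniform(0, 10) for _ in range(8)] for _ in range(30)]
for generation in range(200):
    # Selection: keep the better half, refill with mutated survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

best = max(population, key=fitness)
print(round(-fitness(best), 3))   # the error shrinks toward zero
```

Note that the fitness function is where the engineer’s purpose lives, which is exactly the point made above: the variation is blind, but the selection criterion is chosen.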

I have no plan to view the podcast, but readers are invited to search it out and sign up. There may be something significant relating to real abuse of science. The past few weeks have seen egregious abuse of science for political gain, as serious scientists are sometimes mocked (one receiving death threats). We can only hope real science will be taken more seriously when the current crisis is over.

The Years of Living Stupidly

Number 8 in a series

The photo shows creationists Walter Bradley and Ide Trotter at a workshop hosted by the Texas Education Agency, where they were assigned the task of reviewing high school biology texts for public schools.

I tend to devote this series to people being stupid about science, such as by using science and religion in the same sentence. That often comes about when people, caught up in religion, carry the contagion with them when they step across the line into fields of science—or into any other area requiring rational thought. The people who do this a lot are at the Discovery Institute (DI), the premier organization in this country promoting Intelligent Design.

A rich resource on this kind of foolishness is the DI’s Center for Science and Culture (CSC), founded by creationist Stephen C. Meyer, among others. The thinking of DI fellows and the CSC is made public on an associated site called Evolution News. More recently, I found the following posted on the Discovery Institute site:

Neurosurgeon Michael Egnor: Why Machines Will Never Think

From remarks at the official launch of the Walter Bradley Center for Natural and Artificial Intelligence


This is interesting on multiple levels, one of which relates to Dr. Michael Egnor, whom we have met before:

Michael Egnor is a prominent neurosurgeon and a professor in the Department of Pediatrics at Stony Brook University. He became inoculated against evolution (the science of biological evolution) after reading Michael Denton’s book Evolution: A Theory in Crisis. Egnor has aligned himself with the Discovery Institute Center for Science and Culture (CSC), one of the leading creationist organizations in this country and likely the absolute leader in support of the Intelligent Design version of creationism. An excerpt from one of his posts on the CSC’s Evolution News blog may be characteristic:

Scopes was put on trial for violating the Butler Act, which prohibited teaching human evolution to schoolchildren in Tennessee. What Scopes actually taught, if anything, is unclear, because Scopes was untruthful about what he did, and the trial was a legal ploy to spur a Supreme Court ruling. The truth was a secondary consideration at best to Scopes and to his team.

Hunter’s textbook Civic Biology was racist and taught eugenics. It was vile stuff. If a teacher taught from it today, he would not be prosecuted for violating the Butler Act. He would be prosecuted for federal civil rights violations.

Follow the link and read the post for complete details. Anyhow, Dr. Egnor now proposes to boldly step outside his realm of expertise and into the quagmire that is Artificial Intelligence (AI). I took some time to read through his thinking on the matter, and you are invited to do likewise. What I found is that, while Dr. Egnor is likely correct in some of his conclusions, he is correct for the wrong reasons. An illustration from Dr. Egnor’s monograph:

What is the hallmark of human thought, and what distinguishes thoughts from material things? Franz Brentano (1838–1917), a German philosopher in the 19th century, answered this question decisively. All thoughts are about something, whereas no material object is inherently “about” anything. This property of aboutness is called intentionality, and intentionality is the hallmark of the mind. Every thought that I have shares the property of aboutness—I think about my vacation, or about politics, or about my family. But no material object is, in itself, “about” anything. A mountain or a rock or a pen lacks aboutness—they are just objects. Only a mind has intentionality, and intentionality is the hallmark of the mind.

Another word for intentionality is meaning. All thoughts inherently mean something. A truly meaningless thought is an oxymoron. The meaning may be trivial or confusing, but every thought entails meaning of some sort. Every thought is about something, and that something is the meaning of the thought.

That’s what I like about philosophers. First, they fall back on what other philosophers have said, with little or no attempt at confirmation. Second, they speak of things being true apparently for the sole reason that they say they are true. How about, “Only a mind has intentionality, and intentionality is the hallmark of the mind”? Whether he realizes it or not, what Dr. Egnor has just done is write a definition for the word mind. Please note the statement does not preclude a computer becoming a mind. What it says is that if a computer attains intentionality, then a computer can become a mind. Dr. Egnor never offers any reason a computer cannot become a mind. He says it, so it must be so.

Under other circumstances I would pass off Dr. Egnor’s musing as the product of religious corruption. I cannot do this, because it happens that Dr. Egnor, in his musings, is in the company of mental giants, one being renowned mathematical physicist Roger Penrose. Penrose is of the same mind as Dr. Egnor in this matter. Neither believes a computer can become a mind, and Penrose has written a book on the matter titled The Emperor’s New Mind. I have had a copy of the book since it came out in 1989, but I did not read through it. That’s because I quickly encountered conclusions I could not sign off on. In the book, Penrose seems to invoke the argument from incredulity, much as Dr. Egnor does above. Martin Gardner wrote the foreword, concluding:

Penrose’s achievements in mathematics and physics– and I have touched on only a small fraction– spring from a lifelong sense of wonder toward the mystery and beauty of being. His little finger tells him that the human mind is more than just a collection of tiny wires and switches. The Adam of his prologue and epilogue is partly a symbol of the dawn of consciousness in the slow evolution of sentient life. To me he is also Penrose– the child sitting in the third row, a distance back from the leaders of AI– who dares to suggest that the emperors of strong AI have no clothes. Many of Penrose’s opinions are infused with humour, but this one is no laughing matter.

Penrose, Roger. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford Landmark Science) (Kindle Locations 143-148). OUP Oxford. Kindle Edition.

Here is a sampling of Penrose’s own thinking, highlighted in my Kindle edition by earlier readers:

Most particularly, I argue that the phenomenon of consciousness cannot be accommodated within the framework of present-day physical theory.

Penrose, Roger. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford Landmark Science) (Kindle Locations 153-154). OUP Oxford. Kindle Edition.

My reasoning, as presented in this book, has two main strands to it. The first of these endeavours to show, by appealing to results of Gödel (and Turing) that mathematical thinking (and hence conscious thinking generally) is something that cannot be encapsulated within any purely computational model of thought. This is the part of my argument that my critics have most frequently taken issue with. The second strand of the reasoning is to demonstrate that there is an important gap in our physical picture of the world, at a level which ought to bridge the submicroscopic world of quantum physics to the macro-world of classical physics. My viewpoint demands that the missing physics falling within this gap, when found, will play an essential part in the physical understanding of the conscious mind. Moreover, there must be something outside purely computational action in this sought-for area of physics.

Penrose, Roger. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford Landmark Science) (Kindle Locations 164-170). OUP Oxford. Kindle Edition.

Penrose appears to set aside a special place for living forms, and he does not limit this thinking to the mind. I took four courses from Wolfgang Rindler, and after I got my degree I came back to the campus to attend a 70th birthday party for him. Roger Penrose attended as well, and he gave a talk in which he explained that living material needs to be explained in terms of quantum physics. I gathered that this was an explanation in terms of quantum physics beyond the fact that quantum physics determines the basic chemical properties of the elements, and I asked the question, “Are you resurrecting the concept of vitalism?” He assured me he was not, and I let it go at that. For the moment. The truth is, I consider Penrose’s invocation of quantum mechanics to be vitalism dressed up in a lab coat.

Apparently anybody can be a philosopher, so I’m thinking about giving it a try. In future postings I will provide rational explanations for human thought, life, death, and the origin of the universe. Keep reading.

Rolling Scam


In their book Cyberpunk, authors Katie Hafner and John Markoff chronicle three notorious cases of computer intrusion from the 1980s, the early days of computer networks. The story of Project Equalizer involves a collaboration of cyberpunks, including Hans Heinrich Hübner, aka “Pengo,” and his collaborator Markus Hess. Hess brought down the group through his probing into American computer systems in search of military and trade secrets. The Cuckoo’s Egg is the tale by astronomer Cliff Stoll, working at the Lawrence Berkeley Laboratory, who noticed Hess’s activities and tracked him down.

It’s not just a story of a coming of age for computer intrusion awareness; it’s also the story of astronomer Stoll coming to reconcile his naive world view. Not much is told of the author’s formative years, but by the time he was a 36-year-old post doc at UC Berkeley, he saw things through a short-focus lens that constricted his vision to preconceived notions born of political rhetoric. The Berkeley campus has since the 1960s epitomized radical, left-wing politics, and Stoll, when he came there from a Ph.D. program in Arizona, was apparently already shaped to fit in.

A post doc (postdoctoral) position is a non-teaching job for a Ph.D. needing to do advanced work. The positions are typically not permanent, and when Stoll’s grant money ran out he retreated into a position managing the LBL computer system. As it turned out, scant hours into his new job, events launched Stoll into a career-bending trek through the underworld of cyber crime.

Directed to track down a $0.75 discrepancy in computer charges, Stoll lurched into the financial side of machine computation. In a facility such as LBL, computers are a service sold to users of the lab. Various research projects use lab facilities, and their research grants pay the costs of running the computer center. Billings are apportioned for computer cycles, memory space, disk space, and printing services used. Each billing cycle, projects that have accounts on the system are sent a bill. Only in this case, it appeared one user was shorting the system.

When Stoll investigated, he discovered a user without an active account. That user, sventek, was named for a researcher who had not been at LBL for a year. Some person was using computer facilities without authorization.

Astronomer turned system manager turned cyber sleuth Cliff Stoll documented his months-long quest for the elusive intruder, tracing him all the way back to Hannover, Germany. Along the way he learned the ins and outs of computer security, computer networking, data communications, governmental bureaucracy, and international law.

He set up a means to monitor the intruder’s actions without, himself, being detected by the intruder, and he watched the person’s activities. Alarmed, he observed the intruder using the LBL computer as a base for breaking into other computers at other locations, some as far away as Fort Buckner in Okinawa. Aghast, he observed the intruder penetrating computers operated by the secretive NSA.

Against his personal prejudices, he felt compelled to bring the case before the local office of the FBI. Surprise: they weren’t interested. They had bigger fish to fry; their cutoff level was a $500,000 crime. Stoll ultimately found himself talking to, and involved with, the FBI, the CIA, the NSA, and military security services. Toward the end of his quest he was giving presentations at the highest levels of these offices at their Washington area headquarters.

Along the way Cliff Stoll pulled back the covers on some ugly truths about early computer security, some of which likely remain today. One is a method his intruder used to gain privileged use of the system.

GNU-Emacs is a file editor still in use and still popular with computer programmers. A feature of GNU-Emacs allowed a user editing a file to forward the file by mail to another user. Once received, the file would be placed in a specified directory without any verification that the user had the privilege to write to that directory. The intruder used this mechanism to transfer a bogus atrun program file to a region giving it super user privilege. Once the transfer was accomplished, the UNIX operating system would execute the program, which was set to grant super user privileges to sventek. Super user privileges in a UNIX system place no restrictions on what an account can do.
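
The hole can be illustrated with a toy model. This is a sketch in Python, not the actual Emacs or UNIX code; the user name sventek and the atrun file come from Stoll’s account, but the functions, paths, and permission table are hypothetical:

```python
# Toy model of the privilege hole (illustrative names and paths;
# this is not the actual GNU-Emacs or UNIX code).

FILESYSTEM = {}   # path -> file contents

# Directories each user may legitimately write to in our toy system.
WRITABLE = {
    "sventek": {"/home/sventek"},
    "root": {"/home/sventek", "/usr/spool/cron"},
}

def deliver_flawed(user, contents, dest_dir, name):
    # The mail helper ran with super user privilege and never checked
    # whether the *requesting* user could write the destination.
    FILESYSTEM[f"{dest_dir}/{name}"] = contents

def deliver_fixed(user, contents, dest_dir, name):
    # The fix: check the requester's own rights before writing.
    if dest_dir not in WRITABLE.get(user, set()):
        raise PermissionError(f"{user} may not write to {dest_dir}")
    FILESYSTEM[f"{dest_dir}/{name}"] = contents

# The intruder mails a bogus atrun into a privileged directory:
deliver_flawed("sventek", "grant super user to sventek",
               "/usr/spool/cron", "atrun")
print("/usr/spool/cron/atrun" in FILESYSTEM)   # True: the file landed
```

The design lesson is the one Stoll drew: any privileged helper that acts on behalf of a user must check the user’s rights, not its own.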

Another thing Stoll observed was the intruder cracking passwords to log into other systems. This was made possible by users who selected account passwords found in an English dictionary. The intruder did this:

Using super user privilege, copy the password file to the intruder’s own computer. The password file looked something like this:

jason bdrxkdfjerrtuopqz
eglin  aakrddfblrnwpdof
morgan adrfdbxzporkwntf

And so on. There is a list of user names followed by encrypted passwords. When a user logs in he sees something like this:

login: jason
password: purple

The user supplies the login name and the password, shown in italics here. The password, purple, goes to the computer. The computer executes the encryption process against the password and compares the result with its encrypted copy. It does not save the bare password, so nobody can come back later and find the bare password on the computer.

The problem is, with a copy of the password file on his own computer, the intruder can at his leisure compute the encryption of each word in an English dictionary and match the result against the encrypted passwords in the file. Once he finds an English word that produces a match, the intruder saves the cleartext password for later use. The process can be automated, requiring about one second to encrypt each password on a VAX computer of those days. Guess what: today’s PCs are about 1000 times as fast as those VAXes. To shorten the turnaround time, the intruder would likely execute the encryption algorithm once for a dictionary (about 100,000 words) and save the results for later use.
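
The attack is easy to sketch. In the snippet below, SHA-256 stands in for the old UNIX crypt() routine (which also mixed in a per-password salt, omitted here for simplicity), and all the users and passwords are made up; the principle, hash every dictionary word once and compare against the stolen file, is the same:

```python
import hashlib

def hash_password(word):
    # SHA-256 stands in for the old UNIX crypt() routine (no salt here).
    return hashlib.sha256(word.encode()).hexdigest()

# A stolen password file: user name -> hashed password (made-up entries).
stolen = {
    "jason":  hash_password("purple"),
    "eglin":  hash_password("x9$Qr!t2"),   # not a dictionary word
    "morgan": hash_password("dragon"),
}

# Hash the dictionary once, as the intruder would, and reuse the table.
dictionary = ["apple", "purple", "dragon", "castle", "winter"]
precomputed = {hash_password(w): w for w in dictionary}

cracked = {user: precomputed[h]
           for user, h in stolen.items() if h in precomputed}
print(cracked)   # {'jason': 'purple', 'morgan': 'dragon'}
```

Note that the login system is never touched: the whole search runs offline against the stolen file, which is why Stoll’s intruder could work at leisure.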

This only works if a user selects an English word. Any variation, such as random combinations of letters, mixed upper and lower case, or the inclusion of numbers and special characters, would defeat this approach by making the search intractable. The sad fact, as Stoll discovered, is that many users stuck with easy-to-remember English words, and the intruder sailed right into their accounts.
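
A rough count shows why. Assuming a 100,000-word dictionary and 8-character passwords drawn from roughly 94 printable characters (both figures illustrative), the random search space dwarfs the dictionary:

```python
dictionary_words = 100_000          # approximate English dictionary size
charset = 26 + 26 + 10 + 32        # lower, upper, digits, symbols = 94
random_candidates = charset ** 8    # all 8-character random passwords

print(f"{dictionary_words:,} dictionary candidates")
print(f"{random_candidates:,} random 8-character candidates")
print(f"{random_candidates // dictionary_words:,} times as many")
```

At one encryption per second, the VAX-era figure above, the dictionary takes about a day to hash; the full random space would take on the order of hundreds of millions of years.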

Worse yet, many system managers either retained default passwords for privileged accounts when they set up computer systems, or else they left privileged accounts wide open, requiring no password for access. A sample session recorded by Stoll shows the intruder breaking into a military computer by using the default password of a privileged account running on VMS:

Username: FIELD
Password: SERVICE

Computer System problems should be directed to the Information Systems
Customer Service Section located in building 130, room 2359.
Phone 643-2177/ AV 833-2177.

Last interactive login on Thursday, 11-DEC-1986 19: 11
Last non-interactive login on Tuesday, 2-DEC-1986 17: 30

WARNING— Your password has expired; update immediately with SET PASSWORD

Stoll, Clifford. CUCKOO’S EGG (p. 228). Knopf Doubleday Publishing Group. Kindle Edition.

The message indicates the field service account has not been used since 11 December and that the password has expired. Many systems have the feature to expire passwords, hopefully forcing users to change them periodically. In this case, the intruder was so bent on going after something on this system that he forgot to renew the password before logging off. Once he logged off he was unable to log back on. The irony is that Stoll contacted the Air Force System Command about the intrusion and was promised the matter would be taken care of. How the matter was handled was that word was passed down to system administrators that the field service password had expired. The manager of this system then reset the password: to SERVICE! The intruder was subsequently able to log onto the system at a later date! This kind of mindless system management was observed throughout Stoll’s adventure with the German intruder.

The story includes comedy as well as drama. Stoll set himself up with a pager to notify him whenever the intruder logged onto his computer. The pager barged in at the most inopportune times:

In the shower, I felt revived. Martha sudsed my back while I basked in hot water. Maybe the wholesome rustic life wasn’t so bad after all.

Martha was in the midst of shampooing my hair when the nasty whine of my beeper, buried in a pile of clothing, destroyed our peace. Martha groaned and started to protest: “Don’t you dare.…”

Too late. I jumped out of the shower and ran to the living room, switched on my Macintosh, and called the lab computer. Sventek.

A second later, I’m talking to Steve White at his home. “He’s here, Steve.”

“OK. I’ll trace him and call Frankfurt.”

A moment later, Steve’s back on the line.

“He’s gone. The hacker was here a moment ago, but he’s disconnected already. No use calling Germany now.”

Damn. I stood there in utter frustration; stark naked, wet and shivering, standing in a puddle in our dining room, dripping blobs of shampoo onto my computer’s keyboard.

Claudia had been practicing Beethoven, but startled by the sight of her roommate charging, naked, into the living room, she’d put down her violin and stared. Then she laughed and played a few bars of a burlesque tune. I tried to respond with a bump and grind, but was too obsessed with the hacker to pull it off.

I wandered sheepishly back into the bathroom. Martha glowered at me, then relented and pulled me into the shower again, under the hot water.

Stoll, Clifford. CUCKOO’S EGG (p. 255). Knopf Doubleday Publishing Group. Kindle Edition.

The shower incident inspired a suggestion from their roommate Claudia: why not set a trap for the intruder? The trap involved creating a large amount of phony documentation on Stoll’s computer, documentation that alluded to SDI, the Strategic Defense Initiative popular with the Reagan administration at the time.

The intruder bought into it completely and downloaded massive quantities of useless data, requiring him to stay on line for hours while his connection back to Hannover was traced. Additionally, the documentation included a notice to contact the phony “Barbara Sherwin” at LBL for additional information. Weeks later, after Markus Hess had been arrested, Laszlo Balogh of Pittsburgh, Pennsylvania, sent a letter to “Barbara Sherwin,” requesting additional documentation. The trap was well and truly sprung. The intruder was the only person in the outside world who knew about “Barbara Sherwin.”

In all this Cliff Stoll was constantly stymied by lack of support from government agencies. Even after the FBI, CIA, NSA, and military security became involved in the case, no government funds were forthcoming to support the investigative activities at LBL. The Department of Energy, which had taken over production of nuclear weapons and nuclear power following World War Two, funded LBL, and Stoll was, from time to time, cleared to continue working nearly full time on the project.

As the case wound down, Stoll could get no information about the intruder from government sources. It was as though he were plugging coins into a vending machine and not getting anything out. A visit to Washington to give lectures on the case culminated with a scheduled lecture at CIA headquarters in Langley. It turned out to be a presentation of a different kind:

So this is the meeting. It turns out that the seventh floor is the hide-out for the CIA’s high-muckity-mucks. Hank Mahoney’s the CIA’s deputy director; grinning nearby was Bill Donneley, the assistant director, and a couple others.

“You mean that you’ve heard about this case?”

“We’ve been following it daily. Of course, this case alone may not seem like much. But it represents a serious problem for the future. We appreciate your taking the effort to keep us informed.” They presented me with a certificate of appreciation— wrapped up like a diploma.

I didn’t know what to say, so I stammered out my thanks and looked at Teejay, who was chuckling. Afterward, he said, “We wanted to keep it a surprise.”

Surprise? Jeez— I’d expected to walk into a room of programmers and give a shoptalk on network security. I glanced at the certificate. It was signed by William Webster, director of the CIA.

Stoll, Clifford. CUCKOO’S EGG (pp. 319-320). Knopf Doubleday Publishing Group. Kindle Edition.

The book concludes with an epilogue, with Cliff and Martha, companions and lovers for over eight years, getting married and moving to the Boston area, where Martha obtained a clerkship at federal court. There, Cliff got involved again when the RTM worm of November 1988 hit his and about 2000 other systems on the Internet. One of the people he contacted early on was Bob Morris at the NSA, a computer security expert he had worked with while chasing the German intruder. Ironically, it turned out the person behind the notorious worm attack was Robert T. Morris, Bob Morris’s son, then studying at Cornell.

The book ends without telling of Stoll’s testimony in Germany in the prosecution of Markus Hess, Pengo, and others. In the end it was determined the Project Equalizer gang had sold only worthless information to the Soviets, and they received light sentences. One, Karl Koch, who used the nom de guerre Hagbard, was a mentally disturbed drug addict who apparently killed himself:

Hagbard was last seen alive on May 23, 1989. In an isolated forest outside of Hannover, police found his charred bones next to a melted can of gasoline. A borrowed car was parked nearby, keys still in the ignition.

Stoll, Clifford. CUCKOO’S EGG (p. 369). Knopf Doubleday Publishing Group. Kindle Edition.

One of the tremendous rewards of working at places like LBL is the opportunity to rub shoulders with the brighter lights of human society.


Depressed, I shuffled to lunch. At the LBL cafeteria, Luis Alvarez sat down across from me. Inventor, physicist, and Nobel Laureate, Luie was the twentieth-century Renaissance man. He didn’t waste time on bureaucracy; he demanded results.

“How’s astronomy?” Even from the stratosphere, Alvarez still found time to talk to pipsqueaks like me. “Still building that telescope?”

Stoll, Clifford. CUCKOO’S EGG (p. 105). Knopf Doubleday Publishing Group. Kindle Edition.

Luis Alvarez was awarded the Nobel Prize in Physics in 1968 for his work on particle physics. He is also credited with proposing an asteroid collision as a contributor to the extinction of dinosaurs 65 million years ago. We saw him featured in Richard Rhodes’ book The Making of the Atomic Bomb:

It’s about people who have already won a Nobel Prize getting dirty and carrying blocks of sooty graphite and packages of uranium compound into the lab. It’s (future Nobel winner) Luis Alvarez first learning of induced fission while reading the San Francisco Chronicle in a barber’s chair and rushing off to his lab with an unfinished haircut. It’s General Groves putting a tail on Robert Oppenheimer and learning that the married director of the Manhattan Project science team spent the night with an ex-girlfriend, who was an avowed communist.

In reviewing Kindle editions I sometimes find problems with books that have been transcribed from hard copy. Often transliteration errors, due to failures of the OCR system, show up. That didn’t explain this strange construction I found on page 308:

But according to Marv, this guy less did it in three weeks.

This entry has been corrected to reflect that Pengo’s associate Markus Hess was the key hacker in the Project Equalizer episode.

A headline from a few years back:

New York Times Hacked: Website Back To Normal After Outage

08/29/2013 09:49 am ET | Updated Aug 29, 2013

The New York Times website was back up on Wednesday after what appeared to be an attack by the Syrian Electronic Army.

The website went down for most users Tuesday afternoon. The newspaper revealed shortly after the outage struck that it had been hit by “a malicious external attack.” The Syrian Electronic Army, the pro-Assad group that has targeted numerous news outlets in recent months, claimed responsibility for the attack.

Before there was ISIL, before there was the Syrian Electronic Army, before there was the World Wide Web, there was Cyberpunk.

In the historical study by Katie Hafner and John Markoff, Kevin Mitnick is a sociopathic night stalker, prowling at first America’s telephone networks and ultimately computer centers at the highest levels. Hans Heinrich Hübner, aka “Pengo,” is a disaffected youth coming of age in Cold War Berlin and turning to computer crime with visions of grandeur. Robert T. Morris is a computer genius on the elevator to greatness before being seduced into the world of computer meddling. They are all cyberpunks:

Cyberpunk is a subgenre of science fiction in a future setting that tends to focus on the society of the proverbial “high tech low life” featuring advanced technological and scientific achievements, such as information technology and cybernetics, juxtaposed with a degree of breakdown or radical change in the social order.

Cyberpunk plots often center on conflict among artificial intelligences, hackers, and among megacorporations, and tend to be set in a future Earth, rather than in the far-future settings or galactic vistas found in novels such as Isaac Asimov‘s Foundation or Frank Herbert‘s Dune. The settings are usually post-industrial dystopias but tend to feature extraordinary cultural ferment and the use of technology in ways never anticipated by its original inventors (“the street finds its own uses for things”). Much of the genre’s atmosphere echoes film noir, and written works in the genre often use techniques from detective fiction.

Hafner and Markoff came out with their book in 1991, shortly after computer intrusion began to hit the national news in the 1980s. Headlines such as the one above are commonplace now, showing that these three cases of computer intrusion were just the camel’s nose. Darker things were to come.

So much of what the book is about is today passé. Technology has swept over the world of computer intrusion like an ocean wave, leaving anybody reading the book today marveling at the quaintness of it all. I’m not going to dissect the book. I will just run through the three episodes, giving interested readers a taste of a world gone by.

The book is three almost independent stories:

Kevin: The Dark-Side Hacker

A dysfunctional family is not essential to the creation of a social misfit, but in Kevin Mitnick’s case it was a kindly assist:

Kevin was the kind of kid who would be picked last for a school team. His oversize plaid shirts were seldom tucked in, and his pear-shaped body was so irregular that any blue jeans would be an imperfect fit. His seventeen years hadn’t been easy. When Kevin was three, his parents separated. His mother, Shelly, got a job as a waitress at a local delicatessen and embarked on a series of new relationships. Every time Kevin started to get close to a new father, the man disappeared. Kevin’s real father was seldom in touch; he remarried and had another son, athletic and good-looking. During Kevin’s junior high school years, just as he was getting settled into a new school, the family moved. It wasn’t surprising that Kevin looked to the telephone for solace.

Susan and Kevin didn’t get along from the start. Kevin had no use for Susan, and Susan saw him as a hulking menace with none of Roscoe’s charm. What was more, he seemed to have a malicious streak that she didn’t see in Roscoe. This curiously oafish friend of Roscoe’s always seemed to be busy carrying out revenge of one sort or another, cutting off someone’s phone service or harassing people over the amateur radio. At the same time, Kevin was a master of the soothing voice who aimed at inspiring trust, then cooperation. Kevin used his silken entreaties to win over even the most skeptical keepers of passwords. And he seemed to know even more about the phone system than Roscoe. Kevin’s most striking talent was his photographic memory. Presented with a long list of computer passwords for a minute or two, an hour later Kevin could recite the list verbatim.

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (p. 26). Touchstone/Simon & Schuster. Kindle Edition.

Susan was likewise a product of a shattered family life, quickly growing to a statuesque six feet and early on gaining success as a prostitute. One thing she shared with Kevin and others of the gang was strength in social engineering. A key to success for these early break-in artists was the ability to talk themselves into closely guarded systems and to cajole others into surrendering secret passwords.

Susan liked to illustrate her belief with the following scenario: Take a computer and put it in a bank vault with ten-foot-thick walls. Power it up with an independent source, with a second independent source for backup. Install a combination lock on the door, along with an electronic beam security system. Give one person access to the vault. Then give one more person access to that system and security is cut in half. With a second person in the picture, Susan said, she could play the two against each other. She could call posing as the secretary of one person, or as a technician in for repair at the request of the other. She could conjure dozens of ruses for using one set of human foibles against another. And the more people with access the better. In the military, hundreds of people have access. At corporations, thousands do. “I don’t care how many millions of dollars you spend on hardware,” Susan would

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (p. 61). Touchstone/Simon & Schuster. Kindle Edition.

Kevin gained early notoriety as a ham radio abuser. This attracted the attention of Roscoe, leading to a collaboration that came to be called the Roscoe Gang. It comprised Roscoe, Kevin Mitnick, Susan Thunder, and ultimately Lenny DiCicco. The early interest of Kevin and Roscoe was phone phreaking:

Phreaking is a slang term coined to describe the activity of a culture of people who study, experiment with, or explore, telecommunication systems, such as equipment and systems connected to public telephone networks. The term phreak is a sensational spelling of the word freak with the ph- from phone, and may also refer to the use of various audio frequencies to manipulate a phone system. Phreak, phreaker, or phone phreak are names used for and by individuals who participate in phreaking.

The term first referred to groups who had reverse engineered the system of tones used to route long-distance calls. By re-creating these tones, phreaks could switch calls from the phone handset, allowing free calls to be made around the world. To ease the creation of these tones, electronic tone generators known as blue boxes became a staple of the phreaker community, including future Apple Inc. cofounders Steve Jobs and Steve Wozniak.
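The tones themselves were nothing exotic. As a sketch (the famous 2600 Hz supervisory tone is the example; the frequency is historical fact, the code is mine), a blue box oscillator’s output can be modeled in a few lines of Python:

```python
import math

def tone(freq_hz, duration_s, sample_rate=8000):
    """Samples of a pure sine tone, like a blue box oscillator's output."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

# 2600 Hz signaled an idle long-distance trunk to the switching equipment.
samples = tone(2600, 0.5)
```

Real blue boxes went further (the MF signaling that followed used pairs of frequencies), but the principle is just this: audio, played into the mouthpiece.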

What is most ironic is that Apple Computer, a company notorious for initiating lawsuits over copyright infringement, was started by two individuals engaged in this parallel, illegal enterprise.

From stealing time from the telephone company, Kevin migrated into computer intrusion. At this he became famously adroit, a prime tool being his aforementioned social engineering skills. What eventually brought Kevin down was his vindictive mindset, the same that gained him attention in ham radio circles. He invested enormous effort, and took great satisfaction, in getting even with those he considered had done him wrong or had otherwise disparaged him. When he screwed over Lenny, Lenny returned the kindness by dropping a dime on Kevin. I cringe at the term, now rendered obsolete by the advent of modern telephone systems. The curtain fell this way:

Kevin was taken completely by surprise. The broad grin on Lenny’s face left him confounded. The FBI agents jumped out of their cars and shouted to Kevin that he was under arrest. They demanded that Kevin put his hands up and lean against the car. Kevin laughed a tight little laugh. “You guys aren’t from the FBI. Show me your folds.” Six large FBI identification folds emerged.

Kevin looked at Lenny, who was dancing in little circles and laughing. “Len, why’d you do this to me?”

“Because you fucked me over” came Lenny’s reply.

The agents hustled Kevin into one of the cars.

“Lenny!” Kevin cried out. “Could you call my mom and tell her I’ve been arrested?”

Ignoring the plea, Lenny turned to Chris Headrick and smiled. [Headrick] nodded approvingly. “You did so well you should be in my business.”

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (pp. 136-140). Touchstone/Simon & Schuster. Kindle Edition.

Pengo and Project Equalizer

It’s interesting how the word “equalizer” crept in. Hübner was born to parents who were just lucky to be in West Berlin (the non-Soviet part) when DDR General Secretary Walter Ulbricht began to construct a permanent wall dividing East and West in August 1961. The East-West tension molded the mindset of many German nationals, and of other Europeans of the time as well. The West, dominated by the United States, displayed enormous superiority in weaponry and in technology generally, while the Soviet Union struggled to recover from the ruins of war and chafed under ruinous authoritarian rule. As Hübner and his friends investigated ways of cracking Western computer systems and retrieving valuable data, they saw their activities as working to “equalize” the balance.

The idea was simple enough: they were hackers who could get into some of the world’s most sensitive computers. From those computers they could extract sensitive information, information they knew would interest the Soviets. What was more, they could provide the Soviets with some of the software they needed to catch up with the technologically more advanced West. Why shouldn’t the Soviets want to do business with them? Of course it was illegal. They all knew that. But in selling the Russians military and scientific information, they argued, they would be doing their part for world peace. A name for the project? Equalizer.

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (p. 173). Touchstone/Simon & Schuster. Kindle Edition.

Hübner adopted the name Pengo from a heroic video-game penguin, which pushed blocks of ice about to defeat adversaries. Of all the Equalizer group, Pengo was the one who caught the attention of authorities when he and his associate Markus Hess cracked into computers at Lawrence Berkeley Laboratory. An astronomer then working as a computer system manager, Cliff Stoll, spotted the intrusion serendipitously:

One of his first assignments seemed simple enough: to reconcile a small accounting error that had shown up. LBL used some home-brewed accounting software, and the patchwork of programs, written by summer students over the years, had come up with a seventy-five-cent discrepancy between the normal system accounting and the lab’s own charging scheme. Cliff stayed at work until midnight puzzling over the mysterious seventy-five-cent error, which he suspected might be a computational rounding error.

After careful examination, he discovered it wasn’t a rounding error, but the work of an unauthorized person from outside the lab using the account of an LBL researcher who had left several months earlier. With characteristic gusto, Cliff became a self-appointed one-man SWAT team. He set up traps that captured the hacker’s every keystroke on a printer and alerted him every time the intruder was in the computer. He kept a detailed logbook, and he wrote a software program that tripped his pocket pager whenever the trespasser logged on. Before long, he was doing little else but tracking the uninvited guest. Occasionally he even slept in his sleeping bag on his office floor to keep a constant vigil over the hacker.

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (p. 170). Touchstone/Simon & Schuster. Kindle Edition.

Stoll has written The Cuckoo’s Egg, a book detailing his weeks-long hunt for the intruder. A review will be on-line later this year. Stoll ultimately trapped Hess using a device concocted by his girlfriend:

It was Cliff Stoll’s girlfriend, Martha Matthews, who came up with a brilliant ruse to catch the intruders. Martha was a twenty-four-year-old Berkeley law student headed for a Supreme Court clerkship, her calm bearing an ideal counterweight to Stoll’s manic edge. If this rogue was so persistent in his pursuit of military data, she argued, then they should use his insatiable appetite to trap him. The idea was to round up volumes of government data, disguise it as secret military information, plant it in the LBL computer as bait, then entice the hacker by naming the false files something irresistible like “SDInet.”

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (pp. 190-191). Touchstone/Simon & Schuster. Kindle Edition.

SDI in those days stood for Strategic Defense Initiative, a Reagan administration program, much derided since, to counter ICBM attacks from space. A Soviet spy would definitely be interested in this stuff. And Hess was interested:

Stoll set up the SDInet file so that only he and anyone posing as a system manager would have access to it. The next step was to sit back and wait for the intruder to log on.

A few days later, the hacker was back for a routine cruise of the LBL system. Within minutes, he noticed the SDInet file. And sure enough, he stayed interested for more than an hour. Soon thereafter, Stoll got word that the trace had been completed to a certain residence in Hannover. But he wasn’t given more details, and certainly not the hacker’s name.

Then, as if to provide positive proof that espionage was involved in this hacker’s activities, a few months later, well after the January 30 cutoff date, the lab received a letter addressed to Barbara Sherwin. The stationery letterhead said Triam International in Pittsburgh, Pennsylvania. The author of the letter was one Laszlo Balogh, and he asked for specific classified information that had been listed in the bogus SDInet file. Stoll decided that Laszlo Balogh must have had some connection with the hacker, since Stoll and the hacker were the only two people in the world who could get at the SDInet file. Stoll’s first call was to the FBI. He was told to find a glassine envelope, presumably to preserve fingerprints, and mail the letter at once to FBI headquarters.

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (pp. 191-192). Touchstone/Simon & Schuster. Kindle Edition.

The ultimate trace identified Hess and resulted in the downfall of Project Equalizer. As it turned out, the group had never obtained classified data. Much of what they sold to their Soviet contact in East Berlin, Sergei, was material that could be obtained on the open market, and cheaper. Stoll went to Germany to testify at the trial:

In his conclusions to the court, presiding judge Spiller said he believed the hackers had indeed sold information out of military computers to the KGB, and that the KGB had probably found the information very interesting. But, he added, Sergei couldn’t have seen it as terribly valuable because he didn’t yield to the hackers’ demands for a million marks. In the end, all that hacker know-how went unappreciated, even by the Soviets.

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (p. 250). Touchstone/Simon & Schuster. Kindle Edition.


RTM

In contrast to the other hackers featured in the book, Robert Tappan Morris grew up in a nurturing environment, one of three children of the highly regarded computer scientist Bob Morris and his wife Anne, a music graduate of Bryn Mawr College. After a highly successful career at Bell Laboratories, Bob Morris moved on to work on computer security at the secretive National Security Agency. Early on Robert T. Morris exceeded expectations, and the sky seemed to be the limit for him.

Much in contrast with Kevin Mitnick and Markus Hess, Robert Morris was entirely without malice. His crime was no less earth-shattering:

Phil Lapsley, an engineering student at the University of California at Berkeley, was puzzled. No sooner had he logged in to a Sun Microsystems workstation than it was clear something was amiss.

Computers such as the Sun run dozens of programs at once, so it is routine for people like Lapsley who maintain them to peek periodically to see which programs are currently active. But on November 2, 1988 he saw, hidden among dozens of routine tasks, a small program controlled by an unusual user named daemon. Daemon is not the name of any particular human, but an apt label conventionally used for the utility programs that scurry around in the background and perform useful tasks. But this program was not one that Lapsley recognized.

“Is anyone running a job as daemon?” he asked the others in the “fishbowl,” room 199B at Berkeley’s Experimental Computing Facility. People shook their heads. Then somebody else in the room pointed to one of the screens, where a program that monitored the status of various other computers in the department was displayed. Lapsley looked more closely and discovered that a number of people appeared to be trying to log in to other Berkeley computers. He decided it must be an attempted break-in. At least once a year, someone tried to break into the computers in Cory Hall, which houses the school’s prestigious electrical engineering department. The school year wouldn’t be complete otherwise.

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (pp. 253-254). Touchstone/Simon & Schuster. Kindle Edition.

A horrific night was just beginning. A graduate student at Cornell University, Robert Morris had been experimenting with a self-duplicating, self-spreading computer worm. On the evening of 2 November 1988 he set it loose on a lab system and went to dinner. But a coding mistake gave his creation powers Morris did not intend, and it became a Frankenstein monster out of control on computers connected to the Internet. Although the worm (Morris called it a virus) did no damage to computer files, its consumption of processor resources and its relentless attempts to crack into more systems quickly brought down on the order of 6,000 systems. The damage was in the lost productivity of the infected systems and the hours of work required to restore them.

A reporter at The New York Times eventually identified Robert Morris as the perpetrator:

The anonymous caller to The New York Times on Thursday afternoon made it clear that he didn’t want to disclose who had written the Internet virus. He just wanted to let the Times know that the person who had written it was a well-intentioned soul who had made a terrible mistake in the code.

The switchboard first routed the call to the paper’s national news desk.

“Uh, I know something about the virus that’s going around,” said the caller.

“What virus?” The editor sounded confused.

“The computer virus that’s crashing computers all over the country.”

“Give me your number and someone will call you back,” said the editor.

The editor gave the message and a telephone number to John Markoff, the paper’s computer reporter. Markoff had already heard about the incident. He had received a call at 10:00 that morning from Cliff Stoll, the Berkeley astronomer who had gumshoed his way to the bottom of the West German hacker case a year earlier. Stoll, who was now working at the Harvard-Smithsonian Center for Astrophysics, told Markoff he had been up all night battling the program, which had swamped fifty of the center’s machines. The reporter then spent the morning calling universities and research centers to see if they, too, had been infected. One of his calls was to an occasional contact at the National Security Agency. Markoff had called the NSA in the past on security-related stories, and he thought his contact there might tell him something about what was going on. But his contact wasn’t there and his call wasn’t returned.

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (p. 260). Touchstone/Simon & Schuster. Kindle Edition.

As it was, Markoff’s contact at NSA was Bob Morris. Eventually, when Markoff identified Robert Morris as the perpetrator and noted the same last name, Bob Morris acknowledged the culpability of his son.

Prosecutors convinced Federal District Judge Howard Munson to disregard the absence of malice. The crime for which Robert Morris was charged and convicted was the intrusion itself, only recently classified as a crime. Early on, computer hacking (more properly, computer intrusion) was considered a sport among enthusiasts in the new technology.

With the more recent advent of malicious intent and actual damage, computer intrusion has ceased to be viewed as a sport. Those who consider they are doing a service by highlighting flaws in security should consider a comparison. Suppose you have skimped on the key lock to your house, and some intruder makes use of this lapse, employing something like a bump key to gain admission. He enters, doesn’t break anything, doesn’t take anything, and then leaves. It’s the same as computer intrusion.

That was all over 25 years ago. More recently, the likes of Edward Snowden are considered heroes by some. He did expose a hole in national security, a hole that may still exist. For this he gets no reward and remains on the hook for violating an agreement he signed when he took his job at an NSA contractor.

What is noteworthy is that Snowden’s success mirrors a recurring theme in the book. Snowden did not have access to the material he stole. He conned a co-worker, who did have access, into giving him access to the system holding the files. Kevin Mitnick, especially, made great use of personal skills in obtaining access. Often spoofing a bona fide worker, he would phone up and be given access by an unsuspecting account user.

To get onto Dockmaster, Kevin had found the name of someone outside of the NSA with a guest account. Posing as a technician at an NSA computer center, Kevin had telephoned the legitimate user and said he was issuing new passwords and needed some information: name, telephone and current password. It was an old trick that Kevin and Roscoe had refined together, and it usually worked like a charm.

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (p. 79). Touchstone/Simon & Schuster. Kindle Edition.


Other times he would just wander into a computer center, show an innocent face, and gain access.

Weak passwords abound in this story. I used them early in my career; Purple was a popular password of mine. Modern users, annoyed at having to choose passwords that incorporate mixed upper and lower case, numbers, and special symbols, might take heed. Modern thieves have a way around this and have automated Kevin Mitnick’s social engineering. The technique is called phishing, spelled after the same fashion as phreaking. An email is sent asking you to change your password, which requires supplying your current password. Of course your real password does not get changed, and the crooks then put your real password to their own use. Susan Thunder was right. As long as people are involved, computer systems will never be completely secure.
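Those complexity rules can be stated in a few lines. Here is a minimal sketch of the usual policy (my own formulation, not any particular site’s), one that would have rejected Purple:

```python
def password_ok(pw):
    """Check the usual complexity rules: length, mixed case, digit, symbol."""
    return (len(pw) >= 8
            and any(c.islower() for c in pw)
            and any(c.isupper() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(not c.isalnum() for c in pw))

password_ok("Purple")  # rejected: too short, no digit, no symbol
```

Of course, as the phishing example shows, no complexity rule helps once the user hands the password over.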

Four years after the book came out, Katie Hafner revisited the topic, and the latest edition has her epilogue. Kevin Mitnick did not reform; following the completion of his sentence, he went back to his old ways. He was tracked down living in North Carolina and arrested again:

The records showed that the calls were coming from a local Netcom dial-in site in Raleigh. They were originating from a cellular telephone, hooked to a modem. As soon as possible, Shimomura was on a plane to Raleigh. By 1 A.M. on February 13, he was in the passenger seat of a Chevy Blazer driven by a Sprint cellular technician, his lap piled with scanning and homing equipment: a surveillance device he had rigged out of an Oki cell phone, a palmtop computer to control the Oki and the Sprint technician’s cellular scanner, which had a directional antenna for detecting signal strength, like a sophisticated geiger counter. Shimomura describes that part of the chase as trivial. “It’s like finding a lightbulb in the dark, or an avalanche beacon in the snow,” he said. “You walk toward where it’s brightest.”

Within thirty minutes, Shimomura had homed in on the Players Club apartments, a three-story complex near the airport. When he turned things over to the FBI to make the arrest, Shimomura advised the agents to move swiftly, to reduce the time Mitnick would have to destroy evidence. At 2 A.M. on February 15 the agents knocked on the door of apartment 202. It took Mitnick five minutes to open the door. When he did he demanded to see a search warrant. They had one, but for the wrong apartment. Prosecutors had called a federal magistrate to get a valid warrant, but the agents already were inside. Mitnick was under arrest.

Hafner, Katie; Markoff, John. Cyberpunk: Outlaws and Hackers on the Computer Frontier (p. 362). Touchstone/Simon & Schuster. Kindle Edition.

I have no current information on Pengo or Hess, but Kevin Mitnick has since been employed as a security advisor. Robert Morris is a tenured professor at MIT.

Despite being about the computer industry, the book was obviously composed on paper and later converted to Kindle by mechanical means. Clues show up in failures of the process. A number of transcription errors are obvious.

For example on page 140, within a distance of two inches of each other, are alternate spellings of the name Headrick (Head-rick). Apparently a paper page with “Headrick” broken over a line ending by a hyphen was scanned, and the pieces were not reconnected in the final product.

On page 42 a PDP-8 computer becomes a PDR-8.

On page 65 the strange construction “that the. computers” appears.

Errors appear in a number of other places as well. Possibly the publisher will employ an avid reader to scan for and fix a few of these bugs.

Security Mania

Home security has been a hobby of mine since I retired. Allow me to refer you to a previous post. An item I found particularly useful was this:

I was much impressed with the concept and decided to expand my system, but on the cheap. A more affordable option is the DCS-930L, which doesn’t have tilt and rotate control, lacks some of the other features, and has lower image quality. At $30 each from Amazon (free shipping and no sales tax), these are a real bargain.


As mentioned, I have two of these, and may add more. When I enable the feature, each one sends an email to me when it detects motion, such as when a burglar walks into the field of view. The email carries six attachments: three images captured just prior to detecting motion plus the following three. See the previous link.

Another feature is FTP capability. FTP is the Internet File Transfer Protocol, and it works like this. If a computer on the Internet is running an FTP server, and the computer receives a connection request on port 21, the FTP server is activated and handles the session. The session will initiate a dialog between the computer and the sender. The dialog will likely comprise one or more of the following:

  • Host computer requests a user ID.
  • Host computer requests a password.
  • Host computer verifies the user ID and the password, and continues the dialog.
  • Host computer executes any number of FTP commands from the sender—receive a file, save the file in a directory specified by the sender, delete a file, transmit a file from the host to the sender. And so on.
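The server’s half of that dialog arrives as numbered reply lines: 220 for the greeting, 331 asking for a password, 230 for a successful login, as in the transcripts farther down. Here is a small sketch of how a client might pick those replies apart; the helper is my own, not part of any FTP library:

```python
def parse_reply(line):
    """Split an FTP reply like '230 Login successful.' into code and text."""
    code, _, text = line.partition(" ")
    return int(code), text

# FTP groups replies by their first digit.
KIND = {1: "preliminary", 2: "success", 3: "more info needed",
        4: "transient failure", 5: "permanent failure"}

code, text = parse_reply("331 Please specify the password.")
kind = KIND[code // 100]  # 331 asks the client to continue the dialog
```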

Computer guys already know this stuff.

Besides the ability to send images by email, the DCS-930L can send a stream of images to a specified FTP server, as often as one each second. Since the DCS-930L doesn’t have file storage, this is the only way to capture an image stream with one of these. A problem is, you need an FTP server somewhere on the Internet. One way I suppose you can get access to one is to subscribe to a cloud storage service, but that costs money, and I was looking for a way to do this on the cheap.

Cheap is a relative term, and for me this translated into a computer I have sitting on the shelf not doing anything. Many months ago I purchased a refurbished Dell Latitude E6400, and then I drove to Best Buy and obtained a 1 TB drive for it. I installed the new drive and then installed Ubuntu Linux on it. And that brings up the tech exercise of today—how to set up an FTP server on Ubuntu Linux.

I wisely elected not to build my own server, which would be possible. I have had for 20 years the excellent book by W. Richard Stevens (now deceased) titled UNIX Network Programming. The book explains how to construct all manner of network software, and there’s more. The code is available on the Internet. You don’t have to write it. Just copy, modify, compile.

Talk about cheap. There’s an even cheaper way. Ubuntu Linux (and other flavors besides) comes with an FTP server. All you have to do is activate it, and you’re on-line. Not being an Ubuntu professional, I went to the Internet for help. Specifically, I went to a page on Krizna.com. Here are the instructions, modified by me:

How to setup FTP server on ubuntu 14.04 ( VSFTPD )

There are three popular FTP server packages available: PureFTPD, VsFTPD, and ProFTPD. Here I’ve used VsFTPD, which is lightweight and has fewer vulnerabilities.

Step 1 » Update repositories .
$ sudo apt-get update

You execute the apt-get update command, preceded by “sudo,” or super user do. This executes the command as though you were a super user, which you must be to do this kind of stuff on Linux (or UNIX).

Step 2 » Install VsFTPD package using the below command.
$ sudo apt-get install vsftpd

You only need to do this if your installation of Ubuntu does not already have vsftpd. It turned out mine already had it; when I attempted to install it I was told it was already installed, and nothing was done in response to the command. You can check whether you already have vsftpd by checking the /etc directory. See the following.

Step 3 » After installation open the /etc/vsftpd.conf file and make changes as follows.
» Uncomment the below lines (line no: 29 and 33):
local_enable=YES
write_enable=YES
» Uncomment the below line (line no: 120) to prevent access to the other folders outside the Home directory:
chroot_local_user=YES
» Add the following line at the end:
allow_writeable_chroot=YES
» Add the following lines to enable passive mode.

Step 4 » Restart vsftpd service using the below command.
$ sudo service vsftpd restart

There are some additional steps. See the Krizna page to see more. I did not need to do any of this, as explained below.

I next had to open up my Linksys router to determine the IP address of my E6400, an address that needs to remain stable if this whole business is going to work. I pulled up a DOS window on my Dell Studio 17 and initiated the following dialogue:

$ ftp
Connected to
220 (vsFTPd 3.0.2)
User ( john
331 Please specify the password.
230 Login successful.

See, my E6400 was already set up with a user name and password, and that’s all FTP needed. I quit out of that, because this was only to confirm the FTP server was running and that I could access it. The next task was to set up a DCS-930L to send images by FTP.

I know the IP addresses of my two DCS-930Ls, and I entered one in a browser window to get to the camera’s control panel:


Here I have:

  • Set the host address to
  • Set the port number to 21
  • Entered my user ID (john)
  • Entered my password for the E6400
  • Specified the images from this camera are to go to the /TVRoom sub directory
  • Set passive mode
  • Selected Always so it will run forever
  • Specified one frame every two seconds
  • Selected Date/Time Suffix so no images will be overwritten
  • Told it to create a new sub folder for every hour.
  • Checked the box to enable this operation
  • Clicked the Save Setting button to get the operation started

There is another button below you can click to test operation. It will send a text message to the FTP server, and you can go to the server to see if the message arrived, or you can go to the camera STATUS pane to see if the FTP test passed.
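With the Date/Time Suffix enabled, the camera produces names like DCS-930LB2016041609000101.jpg, as the listings below show. The layout appears to be model prefix, date, time, and a frame sequence number; that reading is my inference from the file names themselves, not anything from D-Link’s documentation. A sketch:

```python
def parse_image_name(fname, prefix="DCS-930LB"):
    """Split a camera image name into (YYYYMMDD, HHMMSS, frame).

    The field layout is inferred from the file names, not documented
    by D-Link, so treat this as a guess that happens to fit.
    """
    digits = fname[len(prefix):-len(".jpg")]
    return digits[:8], digits[8:14], digits[14:]

date, hhmmss, frame = parse_image_name("DCS-930LB2016041609000101.jpg")
```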

Setting up the FTP server requires editing the configuration file, as shown above. You will need super user privilege to do this. What I did was use super user privilege to allow anybody (me) to change the file. I used the vi editor to edit the configuration file. If you are not familiar with vi, you will want to use a friendlier text editor on your Linux box. Note the following two lines added to the configuration file:


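The two lines were lost in an image during conversion; they would have looked something like the following (the port numbers here are illustrative, not necessarily the ones I chose):

```
pasv_min_port=50000
pasv_max_port=50100
```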
This specifies the range of ports the FTP server will use. Allowing 101 ports will allow as many as 101 connections at a time. I am only using one for each camera, plus one for remote management of the image database from my Dell Studio system.

To show what can be done, here is a dialog to retrieve an image from the database. The commands were executed from my Dell Studio:

user@Laptop64 /cygdrive/c/DVD/D-Link/images
$ ftp
Connected to
220 (vsFTPd 3.0.2)
User ( john
331 Please specify the password.
230 Login successful.
ftp> cd TVRoom
250 Directory successfully changed.
ftp> dir
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
drwxr-xr-x 20 1000 1000 4096 Apr 15 23:59 20160415
drwxr-xr-x 12 1000 1000 4096 Apr 16 10:00 20160416
226 Directory send OK.
ftp: 132 bytes received in 0.00Seconds 132.00Kbytes/sec.
ftp> cd 20160416
250 Directory successfully changed.
ftp> dir
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
drwxr-xr-x 2 1000 1000 114688 Apr 16 01:59 0000
drwxr-xr-x 2 1000 1000 110592 Apr 16 02:59 0100
drwxr-xr-x 2 1000 1000 110592 Apr 16 03:59 0200
drwxr-xr-x 2 1000 1000 118784 Apr 16 04:59 0300
drwxr-xr-x 2 1000 1000 122880 Apr 16 05:59 0400
drwxr-xr-x 2 1000 1000 110592 Apr 16 06:59 0500
drwxr-xr-x 2 1000 1000 110592 Apr 16 07:59 0600
drwxr-xr-x 2 1000 1000 118784 Apr 16 08:59 0700
drwxr-xr-x 2 1000 1000 110592 Apr 16 09:59 0800
drwxr-xr-x 2 1000 1000 69632 Apr 16 10:32 0900
226 Directory send OK.
ftp: 620 bytes received in 0.00Seconds 620.00Kbytes/sec.
ftp> cd 0900
250 Directory successfully changed.
ftp> dir
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
-rw-r--r-- 1 1000 1000 30663 Apr 16 10:00 DCS-930LB2016041609000101.jpg

I logged on and navigated down to the TV room directory. Listing the directory contents showed that since I set things up yesterday the DCS-930L had created two new subdirectories, one for 15 April and one for 16 April, as instructed through the FTP setup page above. I next navigated down to today’s subdirectory and had a look around. Subdirectories had been created for hours 0000 through 0900. Good. Everything was running as hoped. I navigated to the 0900 directory and listed the images. It listed several pages of them; I’m only showing one. FTP finished up and said it was through sending the directory.

226 Directory send OK.
ftp: 85956 bytes received in 0.15Seconds 569.25Kbytes/sec.
ftp> bin
200 Switching to Binary mode.
ftp> get DCS-930LB2016041609000101.jpg
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for DCS-930LB2016041609000101.jpg (30663 bytes).
226 Transfer complete.
ftp: 30663 bytes received in 0.09Seconds 356.55Kbytes/sec.
ftp> quit
221 Goodbye.

As you can see, I made sure FTP was in binary mode, because images are not text files. Then I issued the get command for an image. And here is the image. It’s deadly dull: my TV room on a dark and stormy Saturday morning. Nothing to see here, folks. Keep on moving.


With a terabyte drive I figure I can go on vacation for a month or more, collect an image every two seconds from each of two cameras, and have room to spare. If I get notification from one of the cameras that somebody is prowling around my house, I can, when I get home, pull all the appropriate images from the computer drive and take them down to the police station. Provided nobody steals the computer. And that’s the deal: the computer needs to be in an inconspicuous place so it will still be there.

This only covers security against burglary. The setup can be used for other purposes. In our neighborhood we have a single street leading in and out. We have a sign at the entrance announcing video surveillance is operating. The problem is, we don’t actually have any video surveillance. This would be a cheap way to implement it: just point a camera out a window and let it run all the time. If something bad happens and the police need to be called, we check the video record to see who was driving on the streets.

Feedback is welcome. Add a comment or send me an email.

The Undecidability Problem


I’m confident that nobody who has gone through a computer science degree program is unaware of the undecidability problem and the Turing Machine. Some excerpts from the Wikipedia entry sum up the matter:

In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is known to be impossible to construct a single algorithm that always leads to a correct yes-or-no answer.


In computability theory, the halting problem is a decision problem which can be stated as follows:

Given the description of an arbitrary program and a finite input, decide whether the program finishes running or will run forever.

Alan Turing proved in 1936 that a general algorithm running on a Turing machine that solves the halting problem for all possible program-input pairs necessarily cannot exist. Hence, the halting problem is undecidable for Turing machines.
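Turing’s proof is a compact self-reference argument. A pseudocode sketch (illustrative only; halts() is the hypothetical all-purpose decider being disproved):

```
define paradox(program):
    if halts(program, program):   # the supposed general decider
        loop forever
    else:
        halt

paradox(paradox)
```

If halts() answers that paradox(paradox) halts, then paradox loops forever; if it answers that it loops, then paradox halts. Either way halts() is wrong on this input, so no such general decider can exist.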

In full disclosure, I took the course and made an A, but the matter is still hazy to me, and it’s not something I have lost a lot of sleep over. What did interest me was Alan Turing’s work in cracking the Enigma code used by the Germans in World War Two, and that was my reason for obtaining the motion picture about these events, The Imitation Game. A review will be posted on the fourth of May, but in the meantime I obtained a copy of the book that formed the basis for the movie. It’s Alan Turing: The Enigma, and it’s more about Turing and less about the German Enigma. The image above is from the Kindle edition of the book, and it appears to have been taken from the movie. The book also has a photo of an Enigma.


The naval Enigma machine with its lid raised to show the four rotors. Hodges, Andrew (2014-11-10). Alan Turing: The Enigma: The Book That Inspired the Film “The Imitation Game” (p. ix). Princeton University Press. Kindle Edition.

Andrew Hodges has delivered a masterful, some might say exhaustive, volume on the life and works of British mathematician Alan Turing. The book came out in 1983 after two years of presumably solid writing. I will only touch on some high points.

With his description of what is now called the Turing Machine, Alan Turing is often cited as the father, or else the godfather, of the modern computer industry. Others before and after made huge contributions. Charles Babbage, another Englishman, laid out the concept of a mechanical device that is the true forerunner of what we now call a computer, and Ada Lovelace, daughter of the English poet Lord Byron and also assistant to Babbage, is considered to be the first computer programmer. A computer programming language (Ada) is named after her. Arguably the first working computer is attributed to John Vincent Atanasoff, although the Atanasoff-Berry Computer was not programmable. Turing’s concept of a programmable computer is given credit for cracking the code of the German Enigma.

Turing’s early life does not give the picture of somebody about to break thresholds in mathematics and signals intelligence. As a child he was an uninspired student who at some point in his academic career showed a gift for mathematics. This earned him a place at King’s College, Cambridge, and Hodges gives his American readers a guided tour of the British system of matriculation, another matter I have yet to comprehend.

A central part of the book is how Alan Turing’s interest in a peculiar field of mathematics set him onto a course to take on the German war machine. I’m familiar with some of the background, but I’m going to take care not to bore you with a lot of depth. Suffice it to say that in the early 20th century there were items of interest being fiercely pursued in one small corner.

In 1900 German mathematician David Hilbert set forth a number of mathematical issues that needed to be resolved. One statement of his was

that every definite mathematical problem must necessarily be susceptible of an exact settlement …

[Hodges, Andrew (2014-11-10). Alan Turing: The Enigma: The Book That Inspired the Film “The Imitation Game” (p. 118). Princeton University Press. Kindle Edition.]

In Vienna in 1931 Kurt Gödel

demonstrated that any non-contradictory formal system, which was comprehensive enough to include at least arithmetic, cannot demonstrate its completeness by way of its own axioms

thereby refuting Hilbert’s assertion. What Alan Turing did was to use his concept of a computing machine as a means for enumerating all possible “computable numbers.” He demonstrated the Turing Machine could not compute all numbers; therefore there must be well-posed problems whose solutions cannot be computed.

The major theme of this book is how the Turing Machine concept led to the cracking of the Enigma code. The German Enigma was not a secret machine. Prior to the war commercial copies had been sold and used by businesses wishing to keep their correspondence private. The German Navy, in particular, used a modified version. Enigma worked by translating a character entered from the keyboard into a different character, displayed by a lighted position in an array of letters. Additionally, every time a key was depressed, a set of rotors would advance and produce a different connection between the keyboard and the display. The German Navy upgraded the operation of their Enigma devices throughout the war by adding more rotors and patch-plug connections.

Prior to the war the British became concerned about being able to read German signal traffic in the event of hostilities, and they worked with the Poles, who had obtained a purloined copy of a military Enigma and had developed a way to crack the code. Bare days before the Germans invaded Poland the Polish copy and accompanying cryptanalysis technology were spirited out and into British hands.

The movie shows Alan Turing being interviewed for a position on the cryptanalysis team by Commander Alastair Denniston, director of the British Government Code and Cypher School (GC and CS). Denniston was a real character, but the interview is a fabrication. Turing began working with GC and CS prior to the war and was never a reluctant candidate. Nor was he the abrasive co-worker, disdainful of collaboration, depicted in the movie.

The connection between the Turing Machine (no working example of which was ever constructed) and the cracking of the Enigma is this: the Turing Machine is an automaton that can be adapted to perform a multiplicity of tasks by modifying its collection of control states. The concept of a collection of states has since morphed into what we now call a computer program, and it was the idea of constructing a machine controlled by a variable program that enabled the defeat of Enigma.

The team at Bletchley Park, where the British carried out cryptanalysis during the war, was able to crack Enigma codes by feeding message intercepts into an automaton devised by Turing and trying different rotor settings until the output contained legible verbiage. What assisted them throughout the war was incompetence on the part of the German operators. The careless inclusion of expected words allowed the code breakers to vastly narrow their search. Turing’s variable control set allowed adapting the search without reconstructing the machine whenever the Germans made a change in their Enigma.

A parallel theme that runs throughout the book is Turing’s homosexuality. As a boy he realized he found male bodies more attractive than female ones, and the book is a lifelong tale of Turing dealing with his homosexuality. In England at the time homosexual acts were criminal, but nothing came of Turing’s homosexual lifestyle until years after the war, even though he never went to great lengths to hide his orientation. Remarkably, Turing was throughout his life almost fatally honest, and he was casual in disclosing his homosexuality to co-workers and friends.

Someone watching the movie may get the idea that Turing’s arrest in 1952 (the movie has it 1951) for “gross indecency” with a male lover was his downfall. This was not the case. Turing admitted all details once the facts were reported to the police, and he and his partner entered guilty pleas at their trials. Turing did not lose his job at the University of Manchester. He served a one-year probationary sentence, which required he receive estrogen treatments to effect chemical castration. He continued to pursue his new interests in scientific research up until 7 June 1954, when he apparently dipped an apple in cyanide solution and bit into it. He left no suicide note.

Hodges introduces the term “imitation game,” the title of the movie. It has a double meaning. On the one hand it’s the science of cryptology, the masking of messages. On the other it’s Turing living and working as a homosexual in plain sight within a society that criminalized his lifestyle.

Hodges takes what could otherwise be a great spy thriller and turns it into an English prose tour de force. His depth of analysis is scarcely matched even by the likes of Gustave Flaubert and D.H. Lawrence. For example:

He had clung to the simple amidst the distracting and frightening complexity of the world. Yet he was not a narrow man. Mrs Turing was right in saying, as she did, that he died while working on a dangerous experiment. It was the experiment called life – a subject largely inducing as much fear and embarrassment for the official scientific world as for her. He had not only thought freely, as best he could, but had eaten of two forbidden fruits, those of the world and of the flesh. They violently disagreed with each other, and in that disagreement lay the final unsolvable problem. In this sense his life belied his work, for it could not be contained by the discrete-state machine. At every stage his life raised questions about the connection (or lack of it) between the mind and the body, thought and action, intelligence and operations, science and society, the individual and history. But these were questions on which, except in the most special ways, he went out without a word of comment. Russell and Forster, Shaw and Wiener and Blackett held forth on such subjects; Alan Turing played the humble pawn.

Hodges, Andrew (2014-11-10). Alan Turing: The Enigma: The Book That Inspired the Film “The Imitation Game” (p. 659). Princeton University Press. Kindle Edition.

That, in combination with stringent attention to detail, runs to 663 pages of narrative, followed by 63 more pages of notes and index. It’s a lot to digest, and the casual reader will at times wish for a CliffsNotes version. As research into the life and works of Alan Turing it may be unmatched.

Digital Display Color Mapping


This was over 20 years ago, and it was sort of a fun project. We weren’t working with Windows. I think it was UNIX or maybe VAX VMS. Anyhow, we were using bit-mapped images, and we had this color graphics display, and the figure above illustrates how images were formed. Each image pixel was formed using three color dots, red, blue and green. Full disclosure here. Color displays may double up on one of the colors.

Anyhow, a color pixel can be formed by three color dots grouped closely together. To make green, for example, you just activate the green dot and turn off the red and the blue. To make white you turn on all three; to make black you turn off all three. You can make intermediate colors by setting the brightness of individual dots.

Suppose for each dot you have 256 intensity settings. Black is zero, and maximum brightness is 255. Now, to form an image using all possible colors you need three octets (eight bits each), giving 24-bit color. To store such an image in computer memory you would need 24 bits for each pixel, and for a large image that is a bunch of bytes. What you do is reduce the image file size by incorporating a color palette. Here’s how it works (and how we did it).

Our display incorporated a CRT, so each pixel was scanned 30 times a second (it could have been 60). For each pixel the three electron beams were set according to the required color intensity. However, the beam intensities were not set directly by the bits in the image file. They were set by the output of a color palette generator: a chip that accepted input from the image file (the palette index) and output the beam intensities. It worked like this:





Each pixel was stored in the image file as one byte (8 bits). To display the image you transferred the file contents to an image memory in the display controller. The display hardware continually scanned the stored image, sending each byte in succession to the palette generator. The palette generator had 256 registers of 24 bits each. The 8 bits from the image file were used as an address to select one of the 256 registers. Each register contained the 24 bits needed to display the color of the pixel (8 bits for each color).

Obviously, with only 256 palette entries available for each displayed image, you have to be picky about which colors you actually display. Usually you analyze the image (with the computer) and determine which colors give the best fit. Then you program each of the selected colors into one of the registers and map each image color to one of the palette colors. Then, instead of storing all of the true image colors in the image file, you store the palette addresses. You can also store the mapping (the register contents) in the image file; when you display the image, you just read the register settings and load them into the palette generator.
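The indexing scheme above can be sketched in a few lines of Python (a toy illustration, not the actual display code; it assumes the image uses at most 256 distinct colors):

```python
def build_palette(pixels):
    """pixels: a list of (r, g, b) tuples with at most 256 distinct colors."""
    palette = sorted(set(pixels))                  # the 256 "registers" (24 bits each)
    index = {color: i for i, color in enumerate(palette)}
    indexed = bytes(index[p] for p in pixels)      # one byte stored per pixel
    return palette, indexed

pixels = [(255, 0, 0), (0, 255, 0), (255, 0, 0), (0, 0, 255)]
palette, indexed = build_palette(pixels)

# Display side: each stored byte addresses a palette register,
# which supplies the full 24-bit color for that pixel.
restored = [palette[b] for b in indexed]
assert restored == pixels
```

For a 640 × 480 image this cuts storage from 921,600 bytes (3 per pixel) to 307,200 bytes plus a 768-byte palette.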

Back 25 years ago all of this was a lot of fun and made work a little more interesting.

Applied Cryptography

This is a continuation of a previous post.

I subscribe to Scientific American, have for maybe 50 years. I saved this issue:


Yes, it’s all about corals. No, there’s an additional item:

Page 146


Martin Hellman and Whitfield Diffie published a paper “New Directions in Cryptography” in IEEE Transactions on Information Theory (November 1976).

Two kinds of contemporary developments in cryptography are examined. Widening applications of teleprocessing have given rise to a need for new types of cryptographic systems, which minimize the need for secure key distribution channels and supply the equivalent of a written signature. This paper suggests ways to solve these currently open problems. It also discusses how the theories of communication and computation are beginning to provide the tools to solve cryptographic problems of long standing.

Today people may find this curious, but 40 years ago the prospect of e-mail was just looming on the horizon. For those not old enough to remember, what we had for e-mail in olden days was called the telegraph. If you wanted to send somebody a message you went to Western Union and gave them all the text, and they sent your message by wire. Or by radio link. Then somebody printed out the message on the other end, and a guy on a bicycle delivered it to your door. I have received one or more of those.

The hot new concept of e-mail was that a person could sit at his own computer at home or else at his boss’s computer at work, compose a message, enter the recipient’s e-mail address, and hit the send button. Not many people were doing that 38 years ago.

But suppose you want to keep your e-mail contents private. No problem. You just run your encryption program against your e-mail text, and you send the encrypted text. The recipient on the other end will run the proper decrypting program against the message to extract the clear text. If he has the proper cypher key. You have to give him the key.

Suppose giving your recipient the key were not a problem. Now you want to send e-mail to hundreds of people. You have to give each one of them the same cypher key. Even if there were a secure way of doing this, things would begin to get a little shaky. It only takes one loose cannon (just ask Edward Snowden) among your long list of recipients, and your plan for secure messaging is blown.

This is what Diffie and Hellman sought to fix. It works like this:

You have two processes for handling messages. One encrypts messages, and the other decrypts messages. Call the two processes E and D for encrypt and decrypt. Now have all participants come up with their own pair of E and D (no two alike supposedly). Everybody publishes for all the world to see his E. Now we have all these published encryption processes which we will call E1, E2, E3, etc. Everybody keeps his D to himself, a secret. How are you going to use this system?

No problem. If you want to send a message to number 2, you get yourself a copy of E2 and encrypt the message. Then you send the encrypted message to 2, who has no problem extracting the clear text, because he has the only copy of D2. This process is described using the following notation:

D2(E2(m)) = m

In this notation m is the clear text message. The parentheses mean “apply the operator (D or E) to whatever is inside the parentheses.”

What Diffie and Hellman did not do in their 1976 paper was describe a way to implement this. In 1978 Ronald Rivest, Adi Shamir, and Leonard Adleman (RSA) published “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems.” They foolishly offered to send a copy of their paper to readers who requested one. I did not have e-mail at the time, but postal mail worked just fine:


An essential requirement of a public key system is that your everyday Edward Snowden should not be able to take E and derive D from it. The method described by RSA involves using pairs of very large prime numbers. Call a pair of these numbers p and q. Then

p × q = n.

The number n is not prime; it has only two factors, p and q. Now suppose p and q are each 100 decimal digits long (or more). Then n is about 200 digits long (or more). The RSA method uses p and q (and n) to produce e and d. Read the RSA paper, page 6. This involves some nice math, which I will not elaborate on here.

A user R can publish n and e, keeping d (and p and q) private. Somebody wanting to send R a message uses n and e to encrypt the message. R uses n and d to decrypt the message. Even knowing n and e, it is still very difficult to compute d. Computing d is tantamount to factoring n (into p and q), and it is well known that the factoring problem is hard. Factoring n is only a bit less difficult than searching directly for p (or q), and neither is feasible with present-day computational facilities.
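The arithmetic can be sketched in a few lines of Python. These are tiny textbook primes for illustration only (real keys use primes hundreds of digits long, and real systems add padding); Python 3.8+ is assumed for the modular-inverse form of pow:

```python
p, q = 61, 53                  # toy primes; real ones are hundreds of digits
n = p * q                      # 3233, published
phi = (p - 1) * (q - 1)        # 3120, kept secret along with p and q
e = 17                         # the published encryption exponent
d = pow(e, -1, phi)            # 2753, the private decryption exponent

m = 65                         # the clear text, encoded as a number less than n
c = pow(m, e, n)               # encrypt with the public pair (n, e)
assert pow(c, d, n) == m       # decrypt with the private pair (n, d)
```

This is exactly D(E(m)) = m from the notation above: E raises to the e-th power modulo n, and D raises to the d-th power modulo n.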

Practical cryptographic systems use keys many (>> 100) digits long and are considered to be secure. The RSA public key system was proposed as a method for securely distributing keys to users in the field. As of this writing the Wikipedia entry outlines some approaches to attacking the RSA public key system:

  • When encrypting with low encryption exponents (e.g., e = 3) and small values of m (i.e., m < n^(1/e)), the result of m^e is strictly less than the modulus n. In this case, ciphertexts can be easily decrypted by taking the eth root of the ciphertext over the integers.
  • If the same clear text message is sent to e or more recipients in an encrypted way, and the receivers share the same exponent e but have different p and q (and therefore different n), then it is easy to decrypt the original clear text message via the Chinese remainder theorem. Johan Håstad noticed that this attack is possible even if the cleartexts are not equal, but the attacker knows a linear relation between them. This attack was later improved by Don Coppersmith.

See also: Coppersmith’s Attack

  • Because RSA encryption is a deterministic encryption algorithm (i.e., has no random component) an attacker can successfully launch a chosen plaintext attack against the cryptosystem by encrypting likely plaintexts under the public key and testing whether they are equal to the ciphertext. A cryptosystem is called semantically secure if an attacker cannot distinguish two encryptions from each other even if the attacker knows (or has chosen) the corresponding plaintexts. As described above, RSA without padding is not semantically secure.
  • RSA has the property that the product of two ciphertexts is equal to the encryption of the product of the respective plaintexts. That is, m1^e · m2^e ≡ (m1·m2)^e (mod n). Because of this multiplicative property a chosen-ciphertext attack is possible. E.g., an attacker who wants to know the decryption of a ciphertext c ≡ m^e (mod n) may ask the holder of the private key to decrypt an unsuspicious-looking ciphertext c′ ≡ c·r^e (mod n) for some value r chosen by the attacker. Because of the multiplicative property c′ is the encryption of m·r (mod n). Hence, if the attacker is successful with the attack, he will learn m·r (mod n), from which he can derive the message m by multiplying m·r by the modular inverse of r modulo n.
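The multiplicative property is easy to check numerically with toy values (n = 3233 = 61 × 53 and e = 17, chosen here purely for illustration):

```python
n, e = 3233, 17
m1, m2 = 42, 99

c1 = pow(m1, e, n)   # ciphertext of m1
c2 = pow(m2, e, n)   # ciphertext of m2

# The product of two ciphertexts equals the ciphertext of the product.
assert (c1 * c2) % n == pow((m1 * m2) % n, e, n)
```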

I have not yet learned of any systematic way to attack the RSA key system. PGP stands for Pretty Good Privacy, and it incorporates public key technology:

While originally used primarily for encrypting the contents of e-mail messages and attachments from a desktop client, PGP products have been diversified since 2002 into a set of encryption applications which can be managed by an optional central policy server. PGP encryption applications include e-mail and attachments, digital signatures, laptop full disk encryption, file and folder security, protection for IM sessions, batch file transfer encryption, and protection for files and folders stored on network servers and, more recently, encrypted and/or signed HTTP request/responses by means of a client side (Enigform) and a server side (mod openpgp) module. There is also a WordPress plugin available, called wp-enigform-authentication, that takes advantage of the session management features of Enigform with mod_openpgp.

I have my copy of the RSA paper, and thanks to the remarkable progress the Internet has made since 1979 you can now get your own copy through the link I supplied above. I have scanned Hellman’s Scientific American article to a PDF, and I will send a copy to everybody who asks for one. By e-mail.

Applied Cryptography

Not strictly about cryptography, but closely related. This is about hash codes.


It was a long time ago. Maybe 52 years. I started as an engineering student at the university, and they wanted all of us to learn how to use the computer. Seriously, it was the computer. They had one. We all took the (non credit) course as part of freshman orientation.

You need to know right now that a large segment of the freshman engineering class had little to no interest in the computer. That meant that I and a few others (out of a class of hundreds) had the computer to ourselves. We spent a lot of our off time programming all sorts of stuff. And there was something about this computer. In fact, it was something about all computers in those days. There were no user accounts. And there were no passwords. You just sat down at the computer, and you were on.

Those were the good days. Later, working for a company, we obtained access to the university’s CDC-6600 through a phone link. It was shared access with user accounts. And passwords. I began to become acquainted with the concept of passwords. Later at another company we purchased shared access to another company’s computer. It was over a phone link using a printer terminal. You typed, and responses were printed. There was no screen. When you entered the password, the printer went back and x’ed out the password that had printed. Passwords had to be kept secret.

Subsequently there were VAX computers all around with multiple accounts on each, and I eventually ended up sharing a cube with a computer whiz who really knew what was going on inside. And he explained password implementation concepts. It made sense immediately, and I wondered that I had not thought of it myself. It goes like this:

The computer will not let you use its services unless you enter the correct password assigned to your account. So you enter the proper string of letters and digits. The computer must be comparing them with the expected string of letters and digits to determine you have entered the correct password. It must have a copy of the password stored somewhere.

Absolutely not. That would be a dangerous thing to do. If the password were stored somewhere, then some snoop (anybody with administrator privilege, at the least) could copy the password and use it to spoof your login. Very bad. So your password is not stored.

What is done instead is to take the password string you have selected and use it in a computation to produce a new string. That resulting string is all that is stored; your password is never stored. When you enter your password, the computer keeps it in volatile memory only long enough to perform the computation, then discards it. The computer compares the result of the computation with the string it has stored and determines whether the two match.
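The scheme just described can be sketched in Python with a standard hash function. This is a bare illustration; production systems also add a per-user salt and use a deliberately slow function such as bcrypt or PBKDF2:

```python
import hashlib

def digest(password: str) -> str:
    # One-way computation; only this result is ever stored.
    return hashlib.sha256(password.encode()).hexdigest()

stored = digest("correct horse")   # written to the account database

def check(entered: str) -> bool:
    # Hash the attempt and compare digests; the entered password
    # is discarded as soon as the computation finishes.
    return digest(entered) == stored

assert check("correct horse")
assert not check("wrong guess")
```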

But, you say, if I can discover the computation method, then I can just take the stored string and reverse the computation to obtain the user password. You could do that, if you could reverse the computation. What is used is a computation that is very difficult to reverse. Enter the concept of a hash function.

A hash function, in simplest terms, takes a given value, a number or a string, and scrambles it so badly the result is from all appearances completely random. In a useful implementation a hash function performs a “many to one” mapping. Here’s a short explanation of “many to one.” See the diagram:


The two ellipses represent mathematical “spaces.” The difference in size indicates that one space is larger than the other: the larger space contains more points than the smaller one. Now you want to establish a relationship from points in A to points in B. Since A has more points, multiple points in A must map to the same point in B. This is a “many to one” mapping.
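A toy illustration of a many-to-one mapping: hash every string (space A, effectively unbounded) into a single byte (space B, just 256 points). Distinct inputs are then forced to collide:

```python
def tiny_hash(s: str) -> int:
    # Map any string into one of only 256 values.
    return sum(s.encode()) % 256

# Anagrams have the same byte sum, so they land on the same point in B.
assert tiny_hash("listen") == tiny_hash("silent")
```

Real hash functions scramble far more thoroughly than this byte sum, but the pigeonhole principle still guarantees that collisions exist; a good function just makes them effectively impossible to find.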

What can you do with this concept?

Suppose you want to capture the essence of a large document. You want this essence to be small and portable, but you still want it to be unique to this document. A hash code is one way to capture the essence of a large data set.

Enter MD5.

The MD5 message-digest algorithm is a widely used cryptographic hash function producing a 128-bit (16-byte) hash value, typically expressed in text format as a 32 digit hexadecimal number. MD5 has been utilized in a wide variety of cryptographic applications, and is also commonly used to verify data integrity.

MD5 was designed by Ron Rivest in 1991 to replace an earlier hash function, MD4. The source code in RFC 1321 contains a “by attribution” RSA license.

[Some links removed]

The reader is invited to visit the link to RFC 1321. This is where I obtained my original code for MD5 shortly after it came out. “RFC” stands for Request For Comment, and it is the means by which the Internet Engineering Task Force (IETF) reviews, accepts and publishes standards for the Internet.

If you want to use MD5 on your computer you can obtain the code from RFC 1321 and compile it on your computer. For this exercise I’m using a copy obtained with Cygwin, an application that allows users to run a UNIX-like environment on Windows computers.

Here’s a demonstration. I obtained an article published in the newsletter of the North Texas Skeptics in October 2001. The item is wordy, and I won’t reproduce all of it here. I will exhibit one paragraph. The article is about creationists’ use of phony academic credentials, and I discussed creationist Kent Hovind. Among other things I covered the misuse creationists often make of protein sequence comparisons between different biological species:

Furthermore, comparison of sequences for the different organisms show what should be expected from evolution. Although cytochrome c performs much the same function in the different organisms it shows these differences due to random DNA copying errors during reproduction. As long as the resulting protein performs a useful (and required) function in the descendent organism, the descendent will thrive and reproduce, and the error will be retained in the subsequent lineage. The further along the line of descent a particular organism is the more accumulated change there will be. If a lineage branches, as during the formation of a new species, the chain of differences will diverge, as well. The result is that the accumulated differences between two living organisms marks the amount of change since the two lineages diverged.

Ignore, for a moment, my misspelling of “descendant” in the article. To perform this demonstration, I captured the text of the article into a text file, and then I executed MD5 with the file as input. Here’s what I got:

$ md5sum message.txt
54eb1c5086598e9f925bb3ec30c215f0 *message.txt

The first line is from my computer screen when I executed md5sum, the Cygwin MD5 implementation. The command line parameter is “message.txt,” the name of my text file. The second line is what md5sum printed out. It incorporated every character in the file, not just the text quoted above, and used them in the computation of a 32-digit hexadecimal value. It digested the entire file into 32 hexadecimal characters. And that’s how MD5 gets its name. “MD5” stands for “message digest 5.” This is the fifth version of Ron Rivest’s message digest utilities.
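Readers who would rather not install Cygwin can reproduce the digest step with Python’s standard hashlib library. Here is a minimal sketch (the function name is my own) that, like md5sum, digests a file’s entire contents into 32 hexadecimal characters:

```python
import hashlib

def md5_of_file(path):
    """Digest a file's entire contents into a 32-hexadecimal-digit
    MD5 hash, reading in chunks so large files need not fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Calling md5_of_file("message.txt") produces the same 32-digit value md5sum prints for that file, since both hash the identical bytes.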

How is this useful?

Suppose somebody pays me to do some research and produce a lengthy report. I do all of this and produce a big write-up. I send my write-up to the person who paid me for the report. And I get a phone call. My client says he received the report, but some things do not look right. It’s possible somebody mucked with the report before passing it on to him. He may not have a legitimate copy.

Rather than having my client send me his copy of the report so I can go over it line by line, I ask instead that he just execute MD5 against his copy. This is assuming the original is an electronic file. My client does this and then reads back the 32-digit hash code to me over the phone. I compare what he tells me with my own copy of the hash code. It’s easy to do. There are only 32 digits to compare.

Let’s see what would happen if somebody did muck with the report. Here is another version of the previous quote. Only I have made a minor change. Can you find the change?

Furthermore, comparison of sequences for the different organisms show what should be expected from evolution. Although cytochrome c performs much the same function in the different organisms it shows these differences due to random DNA copying errors during reproduction. As long as the resulting protein performs a useful (and required) function in the descendent organism, the descendent will thrive and reproduce, and the error will be retained in the subsequent lineage. The further along the line of descent a particular organism is the more accumulated change there will be. If a lineage branches, as during the formation of a new species, the chain of differences will diverge as well. The result is that the accumulated differences between two living organisms marks the amount of change since the two lineages diverged.

I made the change in the message.txt file and ran md5sum against the changed file. Here is the result:

$ md5sum message.txt
ca27c1bb6311517eb3ed51129bec0804 *message.txt

For comparison, here are the two hash codes side by side:

54eb1c5086598e9f925bb3ec30c215f0
ca27c1bb6311517eb3ed51129bec0804

A one-character change in the file made this difference in the hash code. Use of a hash function such as MD5 makes it very easy to detect minor differences between large documents. This works as well for binary images such as photographs.
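This scrambling is easy to reproduce. A short Python sketch using the standard hashlib library, hashing two strings that differ only by one comma:

```python
import hashlib

original = b"the chain of differences will diverge, as well."
altered  = b"the chain of differences will diverge as well."

h1 = hashlib.md5(original).hexdigest()
h2 = hashlib.md5(altered).hexdigest()

# Removing a single comma yields a completely different 32-digit code.
print(h1)
print(h2)
```

The two printed codes bear no resemblance to each other, even though the inputs are nearly identical.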

I have a Canon digital camera, and it has a feature that embeds a hash code into each image. Canon keeps the details to itself, but the company provides a service, if needed, to validate images from their cameras.

Suppose I photograph a criminal act in progress, and later I bring my photo to court as evidence against the accused. The defense attorney challenges the validity of my photograph, claiming I have Photoshopped his client’s license plate number onto the license plate of the getaway car in the photo. I say, here is the RAW image from the camera. Canon verifies this is indeed a RAW image from my particular camera, and furthermore it has not been altered. The criminal is off to the slammer, cursing hash codes all the way.

What makes it hard to defeat hash code verification? Two things. First, the mapping is many-to-one. You could attempt to demonstrate the hash code in question actually represents a different data set; after all, this is a many-to-one mapping, so the hash code could have come from a different file. Hold that thought for a moment and go on to the second point: it is very difficult to back out a hash code computation. The emphasis is on the word “very.”

One way to back out the computation of a hash code is to find another data set that produces the same hash code. Good luck. A 32-digit hexadecimal number is a large value, and you really do not have time for the task of coming up with an alternative data set. The nature of hash code functions such as MD5 is that they offer no assistance to anybody attempting to come up with an alternative data set. There is no guidance in the search. The slightest change produces a complete scrambling of the hash code.

But back to the first point again. Suppose you did find (eureka!) an alternative data set. That data set would have to make sense. A nonsense string of characters would not support a claim that the alternative text is valid.

My statement that backing out an MD5 calculation is problematic is a bit strong. But only a bit. That’s the conclusion of further research:

In 1996 a flaw was found in the design of MD5. While it was not deemed a fatal weakness at the time, cryptographers began recommending the use of other algorithms, such as SHA-1—which has since been found to be vulnerable as well. In 2004 it was shown that MD5 is not collision resistant. As such, MD5 is not suitable for applications like SSL certificates or digital signatures that rely on this property for digital security. Also in 2004 more serious flaws were discovered in MD5, making further use of the algorithm for security purposes questionable; specifically, a group of researchers described how to create a pair of files that share the same MD5 checksum. Further advances were made in breaking MD5 in 2005, 2006, and 2007. In December 2008, a group of researchers used this technique to fake SSL certificate validity, and CMU Software Engineering Institute now says that MD5 “should be considered cryptographically broken and unsuitable for further use”, and most U.S. government applications now require the SHA-2 family of hash functions. In 2012, the Flame malware exploited the weaknesses in MD5 to fake a Microsoft digital signature.

[Some links removed]

All that aside, MD5 is useful for verification of data integrity internally. In a non-hostile environment, where people are not trying to crack your security, and you just want to verify you have the correct copy of your data set, then MD5 is careful, quick and kind. I have most recently used it to verify the software build I was submitting to automated test was the correct and uncorrupted version designated for the test. Try it. You’ll like it.

Discrete Event Modeling

Fourteen years ago I was working for a company that made telecommunications equipment. The company was based in Paris, France, but our offices were in Richardson, Texas, later Plano. Our location was fairly new. The company wanted to break into the American markets, and they also wanted to get more into the Internet side of the industry. The company was a world leader in optical technology, and a logical project was to design an optical router.

In the simplest terms, a router is a special kind of switch. The essentials are diagrammed in the figure below:



Data packets come in on the channels at the left. The router examines each packet and determines which channel on the right to send it out. If there are many input channels and many output channels, and the channels are optical, such as OC-192 (10 billion bits per second), then the router internals have a lot of work to do.

Our plan was called TIPOR, the Terabit Internet Protocol Optical Router. The incoming signals would be optical packets, the path through the switch would be optical, and the output channels would be optical. TIPOR would have optical switches to enable the operation. Some smart guys came up with a design proposal. We needed a way to simulate its operation to verify the performance was feasible.

All of this was explained at a meeting about this time in 2000. I was sitting through the meeting and also thinking about the simulation problem. I recalled a similar situation from a previous job. We were designing a computer system for a smart weapon, and it needed to have horrendous performance. We were proposing multiple processors to pipeline the internal operations, and I needed to run simulations to verify feasibility. They told me I should use ADAS. One of the principal engineers on the project had been in on the development of ADAS, and she was sure it was the way to go.

ADAS was not the way to go. ADAS is a Petri net simulator, and Petri nets did not provide the functionality needed to simulate our multi-processor pipeline. I knew that if I could solve a particular software problem I could write computer code for the required simulation. Only I did not have time. My spare time was taken up by college courses for my physics degree, and I did not have any time left over to go off on a tangent. So I attempted to use ADAS. It was a complete failure. People were not happy with me.

So, two jobs later I already had my physics degree, and I figured I had enough time to work out the code necessary to simulate TIPOR. And as I sat in the meeting it came to me what I needed to do.

Presently a team of four of us got together with the goal of simulating TIPOR operation. Besides me, there were Hal Badt, Gerard Damm and Prasad Golla. Here’s the simulation concept we came up with:


This diagram does not show the router internal workings. That would require a much more complex diagram. This simplified diagram shows the essentials of what we needed to simulate.

On the left I show only one input queue. In reality there were 512 input queues. Data packets come in on the optical channel at the left, but they don’t go immediately into the switch. That’s because:

  • There are other packets already in the queue that have not gone into the switch.
  • The switch is busy changing its setting—selecting the appropriate output queues to connect to the appropriate input queues.
  • The switch is busy transmitting packets from input queues to output queues.

What we needed to simulate:

  • Unscheduled arrival of data packets into the input queue
  • Placing packets into the input (FIFO) queue and taking packets out to send to the switch
  • Actually simulating the switch settings to route packets to the appropriate output queues
  • Operation of the output queue

Implementing a FIFO queue in computer software is straightforward. You establish a fixed array of objects (which could be numerical values or complex data structures). Then you maintain two variables, one pointing to the head of the queue and one pointing to the tail. Initially both pointers point to the first element in the array. To insert an item into the queue, you place it at the position indicated by the tail pointer and advance the tail pointer by one. To remove an item from the queue, you take the item at the head pointer and advance the head pointer by one. You put in logic to make the pointers wrap around from the top of the array back to the bottom. This is called a circular queue buffer; lots of computer operations use this method, and there are even chips that implement the process. You also put in checks to prevent attempts to remove something from an empty queue and to flag when the queue becomes full and cannot accept any more items.
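Our code was written in C, but the scheme translates to any language. Here is a minimal Python sketch of the circular queue just described (the class and method names are my own):

```python
class CircularQueue:
    """Fixed-size FIFO built on a static array with wrapping
    head and tail indices."""
    def __init__(self, capacity):
        self.items = [None] * capacity
        self.capacity = capacity
        self.head = 0   # next item to remove
        self.tail = 0   # next free slot
        self.count = 0  # distinguishes a full queue from an empty one

    def put(self, item):
        if self.count == self.capacity:
            raise OverflowError("queue full")
        self.items[self.tail] = item
        self.tail = (self.tail + 1) % self.capacity  # wrap around
        self.count += 1

    def get(self):
        if self.count == 0:
            raise IndexError("queue empty")
        item = self.items[self.head]
        self.head = (self.head + 1) % self.capacity  # wrap around
        self.count -= 1
        return item
```

Items go in at the tail and come out at the head, so arrival order is preserved without ever shifting array elements.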


That handles the management of objects moving through the simulation, in this case data packets. This would not be a very interesting simulation if all you did was to flow items through. You also need to manage events. An event is some action to take when certain situations arise. In particular you want events to occur in correct chronological order. You need to initiate and maintain a time base for events. That is a little more elaborate, and that’s the problem I had to solve.

In my systems design course at the university Professor Ivor Page explained how to do this, but I only made a B in his course, so I had to dredge the implementation out of my mental garbage heap. Here’s what you need to do.

Establish a queue, as previously described, but it’s no longer FIFO. Items in the queue are events, and they are maintained in the queue in chronological order. Only future events are maintained. Once an event has occurred it’s discarded from the queue. When the simulation, based on something that has just happened, creates a new event, it computes the simulation time for the event. When that time arrives the event will be executed. Now what is needed is to insert the event into the event queue in correct chronological position. Usually this means the event will have to be squeezed in between two other events already in the queue. And that was the big problem to be solved.

Of course, many others had already implemented a solution well before my time, but I did not have access to the code. So I had to re-invent the wheel.

The obvious way to insert something into the middle of a queue is to emulate what happens when somebody breaks into line at the ticket window. Everybody behind him has to move back one. That’s not an efficient way to implement an event queue. There is a better way, and Dr. Page did not explain this to us in class. So here it is:


This is a simplified view of the concept:

  • Establish a static array of event items.
  • Each event item contains a pointer to the code to be executed for the event.
  • Each event item also contains the simulation time the event is to be executed.
  • Each event item also contains a pointer to the event next in line (in time).
  • Each event item also contains a flag that indicates whether this is the last event in the chain.
  • A variable is maintained pointing to the beginning of a pool of available array elements.
  • A variable is maintained pointing to the event that is to be executed.
  • To insert an event into the event queue, select the next available element from the pool and fill out the components of the event. Unlink the event from the pool, and search the existing event chain for the correct position (based on simulation time) to insert the event. Then insert the event into the event queue by changing the links.
  • To execute the next event in the simulation, select the event at the Current pointer and link its item back into the pool chain.
  • Use the pointer to the executable code from the event item to execute the code for the event. Use other components from the event item as parameters for execution of the event code.
  • Generally the execution of an event will involve generating another event to be executed in the future, and the code will insert the new event into the event queue.

I have omitted the ugly details of how all this is supposed to work, but I have the code if anybody wants to do discrete event simulation. This is the essence of discrete event simulation.
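Our implementation used a static C array with index links; here is a minimal Python sketch of the same linked-pool scheme described in the list above (all the names are my own):

```python
class EventQueue:
    """Static array of event slots. Free slots are chained into a
    pool; pending events are chained in chronological order."""
    NIL = -1

    def __init__(self, size):
        self.time = [0.0] * size     # simulation time of each event
        self.action = [None] * size  # code to execute for each event
        self.next = list(range(1, size)) + [self.NIL]  # link fields
        self.free = 0                # head of the pool chain
        self.current = self.NIL      # head of the pending chain

    def schedule(self, when, action):
        # Take a slot from the pool.
        slot = self.free
        if slot == self.NIL:
            raise OverflowError("event queue full")
        self.free = self.next[slot]
        self.time[slot] = when
        self.action[slot] = action
        # Walk the pending chain to the correct chronological spot.
        prev, cur = self.NIL, self.current
        while cur != self.NIL and self.time[cur] <= when:
            prev, cur = cur, self.next[cur]
        # Splice the new event in by relinking, not by shifting.
        self.next[slot] = cur
        if prev == self.NIL:
            self.current = slot
        else:
            self.next[prev] = slot

    def run_next(self):
        # Unlink the earliest event and return its slot to the pool.
        slot = self.current
        if slot == self.NIL:
            return False
        self.current = self.next[slot]
        when, action = self.time[slot], self.action[slot]
        self.next[slot] = self.free
        self.free = slot
        action(when)  # execute the event's code; it may schedule more
        return True
```

Inserting an event touches only a few link fields, which is why nothing behind it in line ever has to move.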

This is different from continuous simulation. Continuous simulation attempts to mimic the operation of continuous processes. Such a process might be the flight of a missile under the influence of motor thrust, gravity, and aerodynamic forces. Simulation of continuous systems relies on the principle that, at fine grain, effects are nearly linear. That allows breaking down a continuous operation into small steps (usually in time) and using the current state of the system to predict the state of the system at the next small increment of time.
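As an illustration of that small-step approach, here is a minimal Python sketch using Euler integration for a body falling under gravity and drag (the constants are illustrative, not from any real system):

```python
def simulate_fall(v0=0.0, dt=0.001, t_end=2.0, g=9.81, k=0.1):
    """Step velocity forward in small time increments: at fine grain
    the dynamics are nearly linear, so v(t + dt) is approximately
    v(t) + a(t) * dt."""
    v, t = v0, 0.0
    while t < t_end:
        a = g - k * v   # net acceleration: gravity minus drag
        v += a * dt     # predict the next state from the current one
        t += dt
    return v
```

With drag included, the velocity after two seconds comes out a little under the drag-free value of g times t; shrinking dt improves the approximation.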

Anyhow, we got this up and running in a few weeks. Gerard and Prasad worked out switch management logic, and I worked out the simulation of the router using the switch control algorithm, and we ran simulations with varying traffic loads and produced plots of the results. Prasad and I got so involved that when Christmas holidays rolled around we skipped them and ran simulations. Critical items of interest included how long it took packets to transit the switch and queue congestion.

Some will tell me (and have) that object-oriented languages, or even ANSI C, provide a nearly transparent means for managing an event queue. This is accomplished by creating a new object for each new event, then discarding the object when the simulation is finished with it. Prasad tried this and reported the simulation took about 1,000 times as long. Software overhead for creating and deleting objects is horrible compared to the method described above. Also, the practice of creating and deleting objects is fraught with danger. Problems can develop if the code is not careful to delete unused objects. “Memory leak” is the applicable term for this condition and is (my opinion) the reason your Windows OS crashes so much.

In a job interview the interviewer asked me how I would use the alloc and malloc functions in C, and I told him I never did that, because time-critical code cannot stand the overhead involved. I did not get the job.

TIPOR was never a successful product, but there were a number of offshoot patents, including a few from our simulations. Our boss, Francesco Masetti, plus Gert Eilenberger and others, published a paper describing the design. I found a copy of the paper and have provided it here for your reading pleasure.

The paper includes two plots generated from the simulation. These were, of course, included in the paper to demonstrate (by simulation) the wonderful performance of TIPOR.