Hacking 2000 years from now...

Join us in discussing all things Honor, including (but not limited to) tactics, favorite characters, and book discussions.
Re: Hacking 2000 years from now...
Post by ldwechsler   » Tue Oct 10, 2017 10:43 pm

ldwechsler
Rear Admiral

Posts: 1235
Joined: Sun May 28, 2017 12:15 pm

The Internet is not much more than 20 years old. That's one percent of the journey to the Honorverse. We're digging into assorted operating systems that are already archeological, perhaps neolithic.

We know there will be many changes in the future. I've read that the whole iPhone/smartphone concept will be gone in ten years. We are being anachronistic here...expecting Windows 1000, perhaps. And, based on experience, that would be completely unusable, the system growing more grotesque with each new iteration.

This is not abnormal for scifi. Remember the communicators in Star Trek? They looked like flip phones, nowhere near as good as our current phones. But they were recognizable. Same here.

So when we extrapolate from our current or past systems, we are way out of our league.
Re: Hacking 2000 years from now...
Post by cthia   » Wed Oct 11, 2017 6:00 am

cthia
Fleet Admiral

Posts: 14951
Joined: Thu Jan 23, 2014 1:10 pm

ldwechsler wrote:The Internet is not much more than 20 years old. That's one percent of the journey to the Honorverse. We're digging into assorted operating systems that are already archeological, perhaps neolithic.

We know there will be many changes in the future. I've read that the whole iPhone/smartphone concept will be gone in ten years. We are being anachronistic here...expecting Windows 1000, perhaps. And, based on experience, that would be completely unusable, the system growing more grotesque with each new iteration.

This is not abnormal for scifi. Remember the communicators in Star Trek? They looked like flip phones, nowhere near as good as our current phones. But they were recognizable. Same here.

So when we extrapolate from our current or past systems, we are way out of our league.

Not exactly. The beginning of the internet was the Arpanet, which began in 1969. The first node went in at UCLA; the second was at the Stanford Research Institute (SRI), where an early hypertext system was running. Two more nodes were added at UC Santa Barbara and the University of Utah. This was the actual beginning of the internet. Although almost 50 years is still a drop in the bucket to the Honorverse, let's not steal important time and technology from the internet's heritage.

Personally, I think the only reason it's still not referred to as the U.S. Defense Department's Arpanet (Advanced Research Projects Agency Network, sometimes called the Darpanet) is that it was really Big Brother's funded research, and Big Brother flexed its muscles. And it all began with a paper written at MIT and a few experiments.

My first computer language, the one that remains my bread and butter, was pulled off the Arpanet in its heyday. I needed a much more powerful programming language for my kit computer, and Lisp was floating around on the Arpanet, made available by some guru at UCLA. I pulled it down during a trip to UCLA to visit my grandmother, who was teaching there. Grandma was quite pleased that I so fervently wanted to go to California to visit her. But it was the only way I could grab the code, because BBSs were not actually "a thing" yet. Besides, she was the sweetest grandmother in history, with a groundside swimming pool in a neighborhood of girls, girls, girls.

The free version of Lisp I initially used was embellished and offered in the late 70's as a free program for Unix. A Unix variant also began floating about the Arpanet, so I grabbed it as well. Since I already had almost a decade of Lisp programming behind me by the late 70's, I was well ahead of the game. Lisp, a programming language that people were afraid of. And they still are. It's those frightening parentheses. I was but a snotty-nosed kid when I first grabbed Lisp off the Arpanet and adapted it for my kit computer. By the late 70's I was running it on a Unix variant that I had adapted as well. And, decades later, who didn't own an Apple once macOS went Unix-based? Lisp and Unix programming skills set me up like a fat cat drinking milk. The internet grew up on Unix; I don't know where the net would be without it, or without Linux, the Unix clone it inspired. I also don't know where the internet would be if AT&T hadn't licensed Bell Labs' original Unix to the outside world.

I do not expect Windows 1000 in our, or the Honorverse's, future. God knows I hope not. A futuristic version of Linux would be quite nice, though I don't expect that either. I do expect that whatever the OS, it will still be Turing complete, running software bound by the same laws of computing. There are laws governing computation that I do not expect to be broken. Though I wouldn't mind too much if they are. Truth of the matter is, we are not meant to be gods or have the power of gods. We'll simply put our eyes out with a shiny new "Red Ryder" BB gun.

Comparing communications to computers is apples to oranges. They don't quite share the same limitations.

Son, your mother says I have to hang you. Personally I don't think this is a capital offense. But if I don't hang you, she's gonna hang me and frankly, I'm not the one in trouble. —cthia's father. Incident in ? Axiom of Common Sense
Re: Hacking 2000 years from now...
Post by Daryl   » Wed Oct 11, 2017 6:36 am

Daryl
Fleet Admiral

Posts: 3488
Joined: Sat Apr 24, 2010 1:57 am
Location: Queensland Australia

I can't remember the title, but there was a popular short story by Isaac Asimov in which the first true spaceship was the first one big enough to contain an entire real computer. Most of the interior was filled with vacuum tubes.
Re: Hacking 2000 years from now...
Post by Imaginos1892   » Wed Oct 11, 2017 3:07 pm

Imaginos1892
Rear Admiral

Posts: 1332
Joined: Sat Mar 24, 2012 3:24 pm
Location: San Diego, California, USA

Ah, yes, the McGuffins of the Golden Age. I recall reading an A. E. van Vogt story with hundred-foot-long slide rules, and vacuum tubes the size of zeppelins.

Robert A. Heinlein's The Door Into Summer had 'memory tubes' that were already obsolete when I read it in the 1970's. Although 'obsolete' is not really the right word.

We need a word for 'Neato gadget that we now recognize as impossible, unnecessary, or downright silly based on knowledge acquired since the story was written'.
———————————
What’s more dangerous than a polar bear? A bipolar bear!
Re: Hacking 2000 years from now...
Post by pappilon   » Thu Oct 12, 2017 6:08 pm

pappilon
Rear Admiral

Posts: 1074
Joined: Tue Sep 05, 2017 11:29 pm

Imaginos1892 wrote:Ah, yes, the McGuffins of the Golden Age. I recall reading an A. E. van Vogt story with hundred-foot-long slide rules, and vacuum tubes the size of zeppelins.

Robert A. Heinlein's The Door Into Summer had 'memory tubes' that were already obsolete when I read it in the 1970's. Although 'obsolete' is not really the right word.

We need a word for 'Neato gadget that we now recognize as impossible, unnecessary, or downright silly based on knowledge acquired since the story was written'.


Would make my day to see Microsoft go belly up.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The imagination has to be trained into foresight and empathy.
Ursula K. Le Guin

~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Re: Hacking 2000 years from now...
Post by cthia   » Sun Oct 15, 2017 7:10 pm

cthia
Fleet Admiral

Posts: 14951
Joined: Thu Jan 23, 2014 1:10 pm

Joat42 wrote:

If you read the article, it says that the Kaspersky AV was exploited. Since you seem so sure, you must have all the facts, right? Not jumping to any conclusions, right?
cthia wrote:<He does not see because he is blind. He is blind because he does not see. Yet he has eyes. He is mind blind.>

Let's see... :roll: :roll: :roll:
Joat42 wrote:Which is a typical cop-out answer in which you say "I'm right, and no matter what the facts are, I'm still right." How very adult of you...


Oh my, did I call this thing from miles away! I've never used Russian software. The very idea is absolutely ludicrous.

New N.S.A. Breach Linked to Popular Russian Antivirus Software

How Israel Caught Russian Hackers Scouring the World for U.S. Secrets

The World Once Laughed at North Korean Cyberpower. No More.

I called this thing years ago! Years!

Son, your mother says I have to hang you. Personally I don't think this is a capital offense. But if I don't hang you, she's gonna hang me and frankly, I'm not the one in trouble. —cthia's father. Incident in ? Axiom of Common Sense
Re: Hacking 2000 years from now...
Post by drinksmuchcoffee   » Sun Oct 15, 2017 11:24 pm

drinksmuchcoffee
Lieutenant Commander

Posts: 108
Joined: Tue Dec 09, 2014 11:51 am

A couple of thoughts.

Formal methods are great for identifying programming errors and some design errors. I'd argue that CASE tools that do flow analysis and code-coverage analysis, combined with better language design, will substantially solve the same problem with less effort on the part of developers.

However, it is hard to imagine such tools eliminating all (or even most) bugs. Requirements documents usually have bugs. If your software interfaces with a number of other complex subsystems, you can easily and inadvertently produce bugs when the assumptions in one or more of those subsystems change. Even tiny and trivial changes can completely screw you.

From the textev, there was a case where Solarian missile-defense systems would not target GA missiles because they were going too fast. How would formal methods catch such a bug? Bluntly, the speed cap was a reasonable assumption to make at the time the software was designed, but the external environment changed and it became a bug.

I'd argue that any advanced spacefaring civilization on a par with the Honorverse (and particularly a civilization with controlled fusion power) would have to have a far more advanced discipline of "reliability systems engineering" or "systemology" than we can now imagine. Such a discipline would no doubt have a great deal to say on how to write more reliable software.

I worked in the computer security field for a number of years, and still follow the literature. The depressing reality is that at this point there isn't really a general theory of security, just a hodgepodge of best practices that have typically been learnt the hard way.

Worse, when you look at such basic components of the field as encryption algorithms, there really isn't any theory of how to construct one, or even any sense of whether a given algorithm will be "good" or not. In general you learn how to design and write encryption algorithms by painstakingly studying other encryption algorithms and trying to learn from their mistakes and their successes. Then your design will probably need to be vetted for years by many other cryptologists before you have any hope of your algorithm being taken seriously. Writing encryption algorithms is probably closer to composing magic spells than to any actual engineering, and I don't know how formal methods would work on magic spells.
Re: Hacking 2000 years from now...
Post by Jonathan_S   » Mon Oct 16, 2017 1:21 am

Jonathan_S
Fleet Admiral

Posts: 8269
Joined: Fri Jun 24, 2011 2:01 pm
Location: Virginia, USA

drinksmuchcoffee wrote:From the textev, there was a case where Solarian missile-defense systems would not target GA missiles because they were going too fast. How would formal methods catch such a bug? Bluntly, the speed cap was a reasonable assumption to make at the time the software was designed, but the external environment changed and it became a bug.
At best, formal modeling with well-defined requirements could let you quickly catch the flip side of that: where you are moving too fast for your own code to handle. See the Ariane 5 explosion, where a well-tested piece of code carried over from the previous rocket failed because nobody noticed the implicit speed limit built into it.
Good formal modeling could force you to make those limits explicit, which at least makes it faster to check them against new requirements or intelligence.
(Arguably the US WWII Mark 14 torpedo suffered from a real-world physical equivalent: it reused the well-tested contact exploder from the previous (slower) torpedo as a backup for its newfangled magnetic exploder, and nobody realized the design had an effective impact-speed limit, one faster than the old torpedo but slower than the new Mark 14.)
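
To make the implicit-vs-explicit point concrete, here's a minimal C sketch. It's hypothetical (the real Ariane code was Ada, and these values are invented), but it shows how a hidden limit rides along in reused code, and what writing the limit down buys you:

Code:
#include <stdio.h>
#include <stdint.h>

/* Hypothetical sketch, not the real Ariane code: a reused routine
   narrows a 64-bit velocity to a 16-bit field. The valid range is
   an implicit assumption nobody wrote down. */
static int16_t pack_velocity_implicit(double v)
{
    return (int16_t)v;  /* past +/-32767 this conversion is undefined
                           behavior in C; the Ada equivalent raised an
                           unhandled Operand Error on Ariane 5 */
}

/* Explicit limit: the assumption is now a named, checkable
   requirement a new airframe's flight profile can be compared to. */
#define MAX_PACKABLE_VELOCITY 32767.0

static int pack_velocity_explicit(double v, int16_t *out)
{
    if (v < -MAX_PACKABLE_VELOCITY || v > MAX_PACKABLE_VELOCITY)
        return -1;      /* fail loudly instead of corrupting data */
    *out = (int16_t)v;
    return 0;
}

int main(void)
{
    int16_t packed;
    /* old vehicle: inside the hidden limit, both versions work */
    printf("old: %d\n", pack_velocity_implicit(30000.0));
    /* new, faster vehicle: the explicit version rejects the input */
    if (pack_velocity_explicit(40000.0, &packed) != 0)
        printf("new: velocity exceeds packing limit\n");
    return 0;
}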

But as you said, formal modeling can't help you if you specify software for anti-missile computers capable of handling missiles up to 0.6c and you're faced with missiles doing 0.8c. (Or spec it to handle 1,000 simultaneous missile targets and someone launches an alpha strike of 10,000.)
Re: Hacking 2000 years from now...
Post by cthia   » Mon Oct 16, 2017 12:13 pm

cthia
Fleet Admiral

Posts: 14951
Joined: Thu Jan 23, 2014 1:10 pm

Jonathan_S wrote:
drinksmuchcoffee wrote:From the textev, there was a case where Solarian missile-defense systems would not target GA missiles because they were going too fast. How would formal methods catch such a bug? Bluntly, the speed cap was a reasonable assumption to make at the time the software was designed, but the external environment changed and it became a bug.
At best, formal modeling with well-defined requirements could let you quickly catch the flip side of that: where you are moving too fast for your own code to handle. See the Ariane 5 explosion, where a well-tested piece of code carried over from the previous rocket failed because nobody noticed the implicit speed limit built into it.
Good formal modeling could force you to make those limits explicit, which at least makes it faster to check them against new requirements or intelligence.
(Arguably the US WWII Mark 14 torpedo suffered from a real-world physical equivalent: it reused the well-tested contact exploder from the previous (slower) torpedo as a backup for its newfangled magnetic exploder, and nobody realized the design had an effective impact-speed limit, one faster than the old torpedo but slower than the new Mark 14.)

But as you said, formal modeling can't help you if you specify software for anti-missile computers capable of handling missiles up to 0.6c and you're faced with missiles doing 0.8c. (Or spec it to handle 1,000 simultaneous missile targets and someone launches an alpha strike of 10,000.)


Interesting posts, guys.

If I might add: failure to handle the higher missile speeds may not be simply a function of software design; it may also be a design limitation of the hardware. I lack the specifics of the design. Having said that...

At first glance a non-programmer might ask, "Why would the programmers place a limit like 1,000 missiles on the software at all?"

Part of the problem certainly could be shortsightedness. After all, who in the early days of computing would have thought the year 2000 would ever become a problem?

Shortsightedness aside, however, there is the cost of memory and storage, which was responsible for -- and formed the basis of -- the Y2K problem...

wiki wrote:The problem started because on both mainframe computers and later personal computers, storage was expensive, from as low as $10 per kilobyte, to in many cases as much as or even more than US$100 per kilobyte. It was therefore very important for programmers to reduce usage. Since programs could simply prefix "19" to the year of a date, most programs internally used, or stored on disc or tape, data files where the date format was six digits, in the form MMDDYY, MM as two digits for the month, DD as two digits for the day, and YY as two digits for the year. As space on disc and tape was also expensive, this also saved money by reducing the size of stored data files and data bases.
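
To see what that quote means in code, here's a toy C sketch (a hypothetical record layout, not any particular mainframe's) of the two-bytes-per-record saving and the ambiguity it bought:

Code:
#include <stdio.h>

/* Toy illustration of the storage trade-off described above: a
   six-character MMDDYY date saves two bytes per record over
   MMDDYYYY, but throws away the century. */
struct record_mmddyy   { char date[6]; };  /* "123199"   */
struct record_mmddyyyy { char date[8]; };  /* "12311999" */

int main(void)
{
    /* Two bytes per record, across millions of records, was real
       money at $10 to $100 per kilobyte. */
    printf("savings per record: %zu bytes\n",
           sizeof(struct record_mmddyyyy) - sizeof(struct record_mmddyy));

    /* The cost: "010100" could be Jan 1, 1900 or Jan 1, 2000. Code
       that reconstructed the year as 1900 + YY rolled over badly. */
    int yy = 0;  /* parsed from the YY field of "010100" */
    printf("assumed year: %d\n", 1900 + yy);  /* 1900, not 2000 */
    return 0;
}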



Declaring a variable big enough to represent large numbers causes the compiler and runtime to set aside memory to hold it, plus the working memory needed to calculate at that precision.

E.g.,
1 byte is the amount of data needed to store 1 character, so at roughly five characters per word, a 200,000-word essay needs approximately 1MB.

Programming consumes memory. The ship's computers need memory for all sorts of tasks. I would imagine that even programmers aboard Honorverse ships don't have the luxury of waste.

Calculations are more complicated still. A small fixed-width variable requires less storage than the wider types needed to represent an effectively unbounded number of missiles. And, of course, the internal calculations used by the algorithm reserve memory that becomes unavailable to the overall system. I suspect that that luxury is not necessarily available in the Honorverse on a per-navy, per-tech basis.
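
As a back-of-the-envelope sketch of that reservation (in C, with an entirely invented track structure and invented salvo sizes), the memory a fire-control program must set aside scales directly with the worst-case threat it is specified to track:

Code:
#include <stdio.h>

/* Invented sketch: per-missile state a tracking program might hold.
   Field choices and sizes are made up for illustration. */
struct missile_track {
    double pos[3];        /* position vector        */
    double vel[3];        /* velocity vector        */
    double accel;         /* observed acceleration  */
    float  threat_score;  /* engagement priority    */
    int    id;
};

#define SPEC_SALVO_OLD   1000   /* threat the RFP specified      */
#define SPEC_SALVO_PODS 10000   /* what a pod wall actually fires */

int main(void)
{
    printf("bytes per track: %zu\n", sizeof(struct missile_track));
    printf("reserved for %5d tracks: %zu KB\n", SPEC_SALVO_OLD,
           SPEC_SALVO_OLD  * sizeof(struct missile_track) / 1024);
    printf("reserved for %5d tracks: %zu KB\n", SPEC_SALVO_PODS,
           SPEC_SALVO_PODS * sizeof(struct missile_track) / 1024);
    return 0;
}

Scale those invented numbers up by everything else fire control juggles (sensor fusion, ECM, telemetry to the counter-missiles) and the ten-fold difference in reserved memory is no longer a rounding error.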

The human brain is estimated to be able to store about 2.5 petabytes. 1PB is 1 quadrillion bytes.


"When we talk about data capacities in this range, it becomes hard to visualize just how grand of a scale we’re talking about. If 1 TB is a trillion bytes, then 1 PB is a quadrillion bytes. In scientific notation, that’s 10¹⁵! It is difficult to imagine such a number.

AT&T transfers approximately 20 petabytes of data through its network every day.

Our Milky Way Galaxy is home to approximately 2 hundred billion stars. If each individual star was a single byte, then we would need 5,000 Milky Way Galaxies to reach 1 PB of data."


We automatically think that Honorverse computers will be powerful. And they will be! What we fail to realize is that even that power will not be enough for Honorverse needs. The more powerful the computer and the more vast the memory, the more things we can, and most assuredly will, create for that available level of power and memory. Build it and our needs will fill it.

Has anyone failed to notice that each new iteration of the OS speaks for, and consumes, more and more of the inevitable growth in memory and storage? A 64GB tablet may have only about 15GB free! The original DOS ran from a 360K floppy disk with plenty of room to spare. I have an old Linux system that I've configured to run in 35MB of space!

Therefore, it isn't ridiculous to assume that even in the Honorverse, programmers have to be wary of wasting system resources. Setting aside variables and buffers sized for tens of thousands of missiles forces the system to reserve memory for a threat environment that is never expected, and that is wasteful. These computers have other things to do as well.

In summation...

The algorithms necessary to handle tens of thousands of missiles at Honorverse speeds require huge amounts of memory, atop an assuredly resource-hungry OS, staggering even for Honorverse computers. It does not surprise me that programmers have to save memory where logically possible, so an SLN programmer coding for a threat environment of 10,000 missiles is simply not a given.

Son, your mother says I have to hang you. Personally I don't think this is a capital offense. But if I don't hang you, she's gonna hang me and frankly, I'm not the one in trouble. —cthia's father. Incident in ? Axiom of Common Sense
Re: Hacking 2000 years from now...
Post by Jonathan_S   » Mon Oct 16, 2017 12:38 pm

Jonathan_S
Fleet Admiral

Posts: 8269
Joined: Fri Jun 24, 2011 2:01 pm
Location: Virginia, USA

cthia wrote:In summation... The algorithms necessary to handle tens of thousands of missiles at Honorverse speeds require huge amounts of memory, atop an assuredly resource-hungry OS, staggering even for Honorverse computers. It does not surprise me that programmers have to save memory where logically possible, so an SLN programmer coding for a threat environment of 10,000 missiles is simply not a given.

Very true. And then you get into the joy of government contracts, where even if your programmers figured out a brilliant way to handle 10 times as many missile tracks on the contract-specified hardware, you're more likely to be penalized for deviating from the RFP than rewarded for delivering excess capacity.

The Request for Proposal would be based on things like the threat environment the Navy expected to face (and therefore wanted to defend against), and should specify acceptance criteria for the number of simultaneous contacts and for target velocity.
I'd pulled the example of 1,000 missiles out of thin air, but thinking about it, that's nearly equivalent to 4 squadrons of the wall (32 SDs) worth of fire from Scientist-class SDs. Still, given safety factors and the possibility of 3-4 squadrons concentrating fire on a single squadron, it seems more likely that they'd specify the ability to track salvos of 2,000-3,000 missiles.

But that's still grossly insufficient for the hellacious missile swarms that SD(P)s, or pod-based system defenses, now routinely throw around.

If the SLN had realized it might face that environment, I have every confidence that its military contractors are capable of producing hardware and software up to the task of tracking and managing the defensive engagement. But they're not going to do so when the customer isn't asking for it and almost nobody realizes the threat scope has changed.
