Six

This talk wasn’t any better than the sixth.

I understand the idea of encrypting all traffic, but it relies on two assumptions:

  1. All traffic needs to be private; and
  2. End-user connectivity is ever-expanding.

Let’s look at those assumptions one by one.

What’s the problem if I fetch Facebook’s favicon.ico? Why does that need to be private? There’s lots of things that people do online that aren’t the least bit objectionable. Does it matter to anyone that I ordered Pizza Hut for dinner last night? Whatever. I brushed my teeth twice yesterday, too, and used different brands of toothpaste. (The tube I took to Shmoocon was still in the suitcase, so I used the other one in the bathroom.)

Perhaps if I was looking at some nice, wholesome porn, I wouldn’t want people to know about it, but for the vast majority of my Internet use, I really couldn’t care less who could see. That that favicon.ico gets fetched multiple times per day by multiple people on my network is not a problem. Maybe there should be a way to cache that common content, so it doesn’t have to be fetched from the source every time. Like a shared cache? Squid, perhaps? Oh, but that doesn’t work when all content is encrypted. My professional experience shows that there’s many times when bandwidth availability does not increase, which brings me to point two.
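For the record, a shared cache for that kind of static content is a few lines of Squid config. A minimal sketch (the LAN range and cache size are made-up examples, and of course this only helps for unencrypted HTTP):

```
# Minimal squid.conf sketch: let one LAN share cached copies of
# common static objects like favicon.ico (plain HTTP only).
http_port 3128
acl localnet src 192.168.1.0/24              # assumed LAN range
http_access allow localnet
http_access deny all
cache_dir ufs /var/spool/squid 1024 16 256   # ~1 GB on-disk cache
```

Point browsers (or a transparent redirect) at port 3128 and that favicon gets fetched from the source once, not hundreds of times.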

There are lots of instances where, despite your cable company significantly bumping your cable modem speed, available bandwidth has not increased.

In one of my not-too-distant past projects, we had remote sites connected by 9600bps satellite connections. Much of the bandwidth available on these fifteen-minute-per-hour connections was spent just sending and receiving SMTP traffic. How much less traffic could have been exchanged with encryption overhead added on top? Yes, maybe there are faster methods of communication available that would enable encrypted communications, but there are also contracts in place binding payment for the slow services for years to come. Even on the ground, there are contracts with telcos that can’t be broken, even in light of faster options. So maybe having cache-friendly web content, and unencrypted email, makes sense there? Maybe?
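To put a number on it (my back-of-the-envelope math, not figures from the talk): 9600bps for fifteen minutes an hour is only about a megabyte of transfer, and every fresh TLS handshake eats a measurable slice of that.

```python
# Back-of-the-envelope: usable data on a 9600bps link, 15 minutes per hour.
link_bps = 9600            # bits per second
window_s = 15 * 60         # fifteen-minute window, in seconds

bits_per_hour = link_bps * window_s
bytes_per_hour = bits_per_hour // 8
print(f"{bytes_per_hour} bytes/hour (~{bytes_per_hour / 1024:.0f} KiB)")

# A full TLS handshake typically costs a few KB of certificates and key
# exchange; assume ~5 KiB per fresh connection (a rough, assumed figure).
handshake_bytes = 5 * 1024
print(f"one handshake is {100 * handshake_bytes / bytes_per_hour:.1f}% "
      f"of the hourly budget")
```

About a megabyte an hour, and every new encrypted connection shaves roughly half a percent off it before a single byte of mail moves.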

The EFF, and the blind promotion of arcane “net neutrality” rules, don’t take any of that into consideration; they assume everyone is using a fast cable modem or a US-based cell network. No, there are tons of people who aren’t.

So the solution is to hand the decision-making process over to an unelected group of bureaucrats relying on technology from the middle of last century?

GMAFB.

But, then, I guess I’m just not woke enough to know that I’m paying less for my mobile phone, with far better data, than I was before NN was repealed. Sorry ’bout that. I suppose, also, that the places with defined contracts got faster with the FCC controlling things. Oh, they did. Totally. Those 9600bps connections are now 10M full-duplex. Guess I missed that.

Five

I went into this one with a fair amount of skepticism. My worries were more than confirmed.

IPv6 isn’t insecure just because you don’t understand it and your antiquated tools don’t work with it.

ZOMG, there’s a separate deprecated Linux firewall tool for dealing with IPv6!!1!

So write rulesets that deal with that difference.
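A sketch of what that looks like, assuming the legacy iptables/ip6tables pair (the prefix below is a documentation address, not a real deployment): the v6 rules parallel the v4 ones, and they filter in both directions.

```
# ip6tables-restore format: the same default-deny posture you'd write
# with iptables, duplicated for the v6 stack. Addresses are examples.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
# ICMPv6 is not optional the way ICMP was; v6 breaks without it.
-A INPUT -p ipv6-icmp -j ACCEPT
-A OUTPUT -p ipv6-icmp -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Filter egress too: allow web out from the local prefix, nothing else.
-A OUTPUT -s 2001:db8:1::/64 -p tcp -m multiport --dports 80,443 -j ACCEPT
COMMIT
```

Same tool family, same syntax, one extra file. This is not hard.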

WTF, my segment scanning tools don’t work the same way they do with the one-true-IP™.

The v4 network stack was introduced in the Nixon Administration. My parents, half of whom are now dead, weren’t even married.

YHGTBFKM; you can alias almost any address.

Really.
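And that aliasing is a one-liner per address (a sketch; the interface name and prefix are made up, and it needs root):

```
# Add extra global addresses to one interface -- scanners keying on a
# single "the" address per host will miss the others.
ip -6 addr add 2001:db8:1::cafe/64 dev eth0
ip -6 addr add 2001:db8:1::f00d/64 dev eth0
ip -6 addr show dev eth0        # both now answer on this host
```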

One of the guys actually tried articulating that NAT is a security feature (probably PAT, not NAT, guy. Maybe if you’d paid any attention in your networking classes, you’d know the difference).

What PAT does do is allow you to effectively wall off your enclave to “protect” the assets inside it. You can do the same thing with a v6 netblock, too. One of the shows I frequently listen to is very concerned about the “5G revolution,” and how it might allow the Chinese to control everything inside the US. Um, no. Any network security guy who’s paying attention can block things going out just as easily as he blocks things coming in.

I guess my message is: learn how to track things other than IPv4, and write your filtering rules for traffic in both directions.

Four

So, Sunday’s talks.

First up was this one.

The concept is good, I suppose, as was the discussion of how to do something like this: dealing with manufacturers, VCs, etc.

During the talk, however, all I could think about was whether you needed to write in LISP to get funded by Y-Combinator.

After thinking about it more, however, I have to wonder how long this will be viable. Yes, it’s a good solution right now, but what about two years from now? Will this USB device be at all useful in the future? (Snark: Maybe there’s something I can look up with my CueCat to determine…)

All that said, it certainly has potential to be more secure and useful than, say, an RSA token.

Interesting talk, though.

Three

This was perhaps the most thought-provoking talk I’ve seen so far.

That said, it probably wasn’t for the reasons the presenters wanted.

A family member is a data scientist. He and I have had discussions about using data in the decision-making process.

Yes, this presentation presented a ton of data. In my opinion, however, little of the data they collected really matters for either decision-making or product quality.

The third speaker was from a well-known group that uses data to drive its recommendations. Much as with this unnamed organization’s automobile and computer recommendations, I don’t place a lot of weight in them.

In a lot of circumstances, even with all the collected data, the recommendations are really just personal preference.

I’ve run into that, too, with some of my professional experiences. A recommendation was preferred, and it was my job to doctor things so the pre-determined winner actually won.

A former customer, specifically a former GS-14, didn’t like that sort of engineering.

Perhaps I’ll find something more compelling to write about this, but things aren’t really coming together at this point. My head is swimming from all the talks today.

Two

Watched this one.

Overall, a good speech, and I swung around to speak to the speaker afterwards to see if she might know someone looking for a quick govvie hire. (I am Schedule A Disabled. Purportedly, that’s a good way to find a Federal job. Given that I’ve been looking for something like four years now, I’m not sure about that.)

She ran through a lot of the numbers about InfoSec job prospects. She did touch on the thing that I’m seeing far too often: people with store-bought degrees or “certifications” who can’t do much of anything other than play Minesweeper. Memorizing things, then taking a purely multiple-choice test, says nothing about your ability to figure out how to deal with something that isn’t a lab example.

She did change my mind, a bit, on certifications that check up on current knowledge.

I can’t say, though, that the CompTIA family does that. Every time I study to win their latest Minesweeper release, I have to unlearn so many things just to pass the damned test.

One

Watched this, and ended up being the only one to ask a question.

(HTF does the non-coder guy with the scarred brain end up being the only one who asks a question…?)

I understand what he was doing, but I don’t see how you could gather any really useful information from the tool unless you have access to the running binary’s source.

The bit he’s using relies on the fork() function.

Maybe that’s still widely in use. Perhaps it’s one of the lazy programming techniques facilitated by fast machines, and virtualization. I don’t know. I haven’t written a line of code in probably a decade.

But even for sloppily-written kludges, you can really restrict what binaries can do, with things like setting a maximum on the number of processes that can be forked. Hell, one of the old ways to crash a system was a fork bomb; any admin worth a shit would easily be able to prevent that from working these days.
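To make the “cap the forks” point concrete, here’s a toy sketch of my own (not the speaker’s tool): on Linux the kernel-side knob is RLIMIT_NPROC, and you can enforce an even tighter per-program budget yourself in userland.

```python
import os
import resource

# Kernel-side knob (Linux): RLIMIT_NPROC caps processes per user.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"current per-user process limit: soft={soft} hard={hard}")

# Userland sketch: refuse to fork past a fixed budget.
MAX_CHILDREN = 2
children = 0

def guarded_fork():
    """os.fork(), but error out once the budget is spent."""
    global children
    if children >= MAX_CHILDREN:
        raise RuntimeError("fork budget exhausted")
    pid = os.fork()
    if pid > 0:          # parent: account for the new child
        children += 1
    return pid

blocked = False
for _ in range(3):       # third attempt should be refused
    try:
        pid = guarded_fork()
    except RuntimeError:
        blocked = True
        break
    if pid == 0:
        os._exit(0)      # child: do nothing and leave immediately
    os.waitpid(pid, 0)   # parent: reap the child

print("third fork blocked:", blocked)
```

A fork bomb dies the same way: set the budget low, and the spawn loop hits a wall instead of the scheduler.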

From the coding side, look at this.

The thing to do when running a problematic program, though, is to be really stingy with anything that could be exploited. His technique relies on child processes; prevent them by baselining the number of processes a clean binary creates, then refusing to allow more.

Add to that things like cryptographic hashes on the binaries, and the whole thing becomes irrelevant.
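That hash check is maybe ten lines of work. A sketch with Python’s hashlib (the throwaway “binary” here is made up for the demo):

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Hash a binary in chunks so big files don't eat RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: record a known-good hash, then detect tampering after a change.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x7fELF pretend binary")
    path = f.name

known_good = sha256_of(path)

with open(path, "ab") as f:      # simulate tampering
    f.write(b" backdoor")

tampered = sha256_of(path) != known_good
print("binary modified:", tampered)
os.unlink(path)
```

Verify against the recorded hash before every run and a swapped binary never gets the chance to fork anything.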

Now this stuff might be useful if you can test binaries in a lab prior to deployment, but I don’t think that’s what the speaker was really getting at.