Ceph — Moving up

OK. I moved from Ceph Octopus to Pacific.

The reason I originally settled on Octopus was that I thought it was the newest release with direct CentOS 7 support for my older servers. I was wrong. CentOS 7 actually maxed out at Nautilus.

Octopus has been a problematic release. A bug kept scheduled cephadm tasks from executing if even the most trivial warnings existed. And trivial warnings were endemic, since there were major issues in trying to get rgw running, caused in part, I think, by a mish-mash of setups from older-style and cephadm-based component deployments; certain directories are not in the same place between the two.

I also couldn’t remove failed rbd pool setups. They kept coming back after I deleted them, no matter how brutally I did so.

And finally, a major lock-up crept into the NFS server subsystem. It turned out, though, that the ceph mount utility from Nautilus works just fine for me against Octopus servers, so I was able to re-define a lot of the NFS stuff to use Ceph directly.

Before I proceed, I should note that despite all my whinging and whining, the Ceph filesystem share has been rock solid. Much more so than Gluster was. Despite all the mayhem and random blunderings, the worst I ever did to the filesystem was trash performance when I under-deployed OSDs. Never did I have to wipe it all out and restore from backups.

OK. So I finally bit the bullet. Fortuitously, I’d run across the instructions for levelling up all the Ceph deployments to be cephadm-based just the day before. Now I followed the simple upgrade instructions.
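
For reference, the cephadm way really is that short. Roughly the following, though the hostname and the Pacific point release here are illustrative, so check the current docs before copying anything:

    # Adopt any remaining legacy-style daemons into cephadm first, e.g.:
    cephadm adopt --style legacy --name mon.myhost

    # Kick off the rolling upgrade, then watch it churn:
    ceph orch upgrade start --ceph-version 16.2.7
    ceph orch upgrade status
    ceph -s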

I had concerns because I had a number of failed rgw daemons I couldn’t get rid of, but actually the worst thing that happened was that the upgrade spent all night flailing because an OSD had gone offline. I restarted that and things began to happen. Lots of progress messages. Occasional stalls (despite what one source told me, it’s NOT desirable for my Ganesha pool to have no replicas!).

But amazingly, it was done! Now deleted rgw dæmons stay deleted! I might even get them hooked into my VM deployments! And since one of Pacific’s improvements had to do with NFS, I hope Ganesha will be happy again.

Why I’m not using DigiSpark’s ATTiny85 in Almost Everything

The Digistump DigiSpark ATTiny85 board is a really attractive bit of hardware. It’s cheap, it can be accessed via the on-board USB connector, and while it hasn’t got the advanced hardware features of its bigger kin, there are a lot of things you can do even with its small complement of ports, memory and features. I’ve got a list, in fact.

And I’m not getting anything on that list done, because I cannot program the device!

A good comparison to the ATTiny85 is the ESP-01 mini-board featuring the ESP8266 processor. That’s an 8-pin board, also with relatively few connections; in fact, it doesn’t even have a USB connector. And in theory, it should be harder to work with, since it runs an internal WiFi TCP/IP stack!

But I haven’t had any problems with the ESP-01, while my ATTiny85 units sit in a box, unused and useless.

Why? Apparently this was a hit-and-run project. The documentation, once written, shows no indication of being kept up to date. There’s a wiki, but so far it’s been of no help.

There are several things I fault in the documentation.

  1. As mentioned, I don’t think it’s up to date as regards current Arduino IDEs or the OSes they run under. Vague hints are given that certain version 1.6 IDEs are not suited (why?????), but the current Arduino IDE is version 1.8. What does that mean?
  2. A big part of the documentation is an animation. I don’t like animations in the middle of instructions.
    1. The motion distracts from reading the bulk of the text.
    2. You cannot print out the documentation and read/annotate it offline. Unless you’re Harry Potter rendering the Daily Prophet, animations don’t print well.
    3. If your animation is not only auto-playing, but starts playing sound when the page is opened, that’s it. I’m gone. Don’t try to sell me anything again. Ever.
  3. Pre-requisites. A side link points Linux users to some udev rules. I have problems with this. First, the rules don’t actually explain what they are doing: if I read them correctly, they prevent the ATTiny85 from automatically creating /dev/ttyUSB* and /dev/ttyACM* devices, but they don’t tell udev not to try to treat the board as a mountable USB drive. Second, there’s no explanation of what to expect when things work right, or, worse, what you will or won’t see if they’re wrong. Popular udev rules often end up as part of stock distros, so it’s important to know whether you’re about to do something redundant or even counter-productive. (See the sketch just after this list for what such a rule typically looks like.)
  4. Devices. Unlike most Arduino interfaces, I don’t think the DigiSpark programmer actually uses any of the devices listed on the Tools menu; instead it talks straight to the hardware, presumably by scanning USB for one (or more???) 16d0:0753 MCS Digistump DigiSpark units. But it would be nice if that had been explicitly mentioned, precisely because it’s not the usual mode of operation.
  5. Operation. There’s no indication of what you should see when a sketch uploads. The IDE isn’t uploading via its normal channels, so its own messages are actually misleading. It was only after a lot of looking around that I even found a listing of what the output is supposed to look like.
  6. Diagnosis. As far as I can tell, there are absolutely no error messages that ever print if the IDE doesn’t connect, much less any indication of what couldn’t be connected to, or why. So I’m left frustrated, with no clue as to which of several subsystems is at fault, much less how to diagnose or correct it.
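
For what it’s worth, here is my own guess at what such a rule would look like, built around the 16d0:0753 ID mentioned above. This is an illustrative sketch, not Digistump’s actual file:

    # Hypothetical rule for a 16d0:0753 DigiSpark: grant console users access
    # to the raw USB node. Note: no /dev/ttyUSB* or /dev/ttyACM* is created.
    SUBSYSTEM=="usb", ATTRS{idVendor}=="16d0", ATTRS{idProduct}=="0753", MODE="0660", GROUP="plugdev"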

Bottom line

Yes, it’s a nice device. Too bad I cannot use it. It takes up space in my parts box, and I won’t be buying any more Digistump products until I hear that they’re committed to making said products usable. Because no matter how cheap they are, cheap and useless is too expensive. Even if this isn’t one of their more profitable units, it indicates how little they’re willing to commit, and thus sets a baseline for how much confidence I can expect from their more advanced offerings.

I’ve got decades of experience on all sorts of equipment. It’s generally been my job to figure out how to work with new and unusual hardware and software. But this is simply more trouble than it’s worth.

Gnome Evolution is an Abomination and gnome-keyring should die in a fire!

Really.

Between Evolution’s penchant for creating non-deletable – and defective – account associations and gnome-keyring’s useless pop-up dialogs, the whole thing almost makes Microsoft Windows seem attractive.

Then again, gnome is, by and large, a slavish attempt to imitate many of Windows’ more obnoxious features. Like the Windows Registry.

Honestly. People have been complaining about this stuff for years and it never gets fixed.

The popup for gnome-keyring is especially odious, since it blocks all other user interaction (including access to pwsafe) and it LIES. It says that the Google password is incorrect when it isn’t.

There are no documented fixes to speak of, short of wiping the entire OS; no one on the respective gnome development teams does anything, and users get angry.

Including me. So I’m going to go take a stress pill.

Why Do-it-Yourself Java Security is a BAD THING

One of my greatest peeves in working with Java in the Enterprise is the fact that everyone+dog seems to think they can do a better job on application security than the Java architects.

OK, so there’s some really awful stuff that’s part of the Java standards, but nevertheless, rolling your own security is still a BAD THING.

Here’s why:

  1. I’m so clever. You’re not as clever as you think you are. Even I’m not as clever as I think I am, and I, of course, am much cleverer than anyone else. Most DIY security systems I’ve encountered (or, alas, developed) have proven to contain at least one easily-findable hole that you could channel the Mississippi River through.

  2. Infrastructure lock-in. Your DIY system is almost certainly tied to your current infrastructure. If the infrastructure changes, you’ll probably have to recode (see below). And, of course, if you ever had any dreams of selling your work to other shops, they probably aren’t set up the same way.

  3. Documentation and procedures. You can’t go down to the local bookstore and buy a book on how to properly use a custom security system the way you can with the standard security system. In fact, any documentation you have is likely to be insufficient and out of date.

  4. Professional Design. The standard security frameworks were designed by security professionals. Yes, I know they’re not as clever as you. But they were trained to work on security first and foremost, to argue with other people looking for bulletproof general-purpose solutions, and then the systems were exposed to legions of evil people for stress testing. If your primary job were security, and you had extensive training in mathematical cryptanalysis, your cleverness might outweigh the fact that these people are but pale shadows of your genius. But your primary job was to develop an application, and most likely the orders weren’t to make it all perfect, just to “Git-‘R-Dun!”.

  5. Declarative security. The standard Java security systems are mostly declarative, and declarative configuration is far less likely to contain unexpected bugs than hand-written code. You can write anything in code, but when all you have is fill-in-the-blank declarative options, your opportunities to make mistakes are far fewer. (See the sketch just after this list.)

  6. Minimal coding. The standard Java security systems are generally minimally invasive. You don’t have to turn the application code upside down every time the security infrastructure changes. You can test most application code with security switched off or using a local security alternative like the tomcat-users.xml file without having to establish some sort of heavyweight alternative to the production security system (or petition the security administrator for test security accounts and privileges).

  7. Maintenance costs. When you tie security code intimately into application code, anyone coming along later to do maintenance will probably be ignorant of the nuances (remember the local bookstore?) and will end up punching a hole in the security. Security is like multi-threading and interrupt handling: it only takes ONE bug to bring everything down.

  8. Development costs. When you tie security code intimately into application code, you have to do twice the work, since you have to code both the business logic and the security logic. And, again, forget to do it in just one place and the whole thing turns into tissue paper. In-application security code also throws yet another spanner into the works when setting up testing and debugging frameworks, since you have to debug both the security code and the application code.

  9. Framework support. Standard frameworks like Struts and JSF have built-in support for the J2EE standard security system. They don’t have built-in support for one-off security. You’re paying for it regardless of whether you use it or not. You might as well reap the benefits.

  10. Mutability. If you want to rework a webapp into a portlet or web service, the standard J2EE security system will mostly port transparently, since it’s minimally invasive. Portlets virtually demand a Single Signon solution – forcing someone to enter login credentials into each and every portlet pane will not win you friends.
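
To make points 5 and 6 concrete, here is a minimal sketch of what declarative security looks like under the Servlet 3.0 standard; the role and path names are invented. The same constraint could instead go in web.xml as a <security-constraint> stanza, and locally you can satisfy it with a test role in tomcat-users.xml:

    import javax.servlet.annotation.HttpConstraint;
    import javax.servlet.annotation.ServletSecurity;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;

    // Declarative security: the container rejects callers lacking the
    // "manager" role before service() is ever invoked. There is nothing
    // here to debug, and nothing to rip out when the infrastructure changes.
    @WebServlet("/admin/reports")
    @ServletSecurity(@HttpConstraint(rolesAllowed = {"manager"}))
    public class ReportServlet extends HttpServlet {
        // doGet()/doPost() contain nothing but business logic.
    }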

Ideology.

Sounds a lot like “idiot”. And with good reason. There is no such thing as a one-size-fits-all Silver Bullet solution. Sometimes the standard security framework isn’t a good fit. About 9 times out of 10, it is, however. For the 10th case, I generally prefer to augment the standard security framework. For example, when I need fine-grained security, I use role-based access control to fence off the major sections of the app, then use the identity provided by the standard framework to retrieve the fine-grained options. Where that doesn’t work or is insufficient, I try to use minimally-invasive approaches, such as Filters and AOP crosscuts. I’ll do anything it takes, in fact, but the more I can work within the supported system, the happier I am.
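
Here is a minimal sketch of that augmentation approach; PermissionStore and all the names below are hypothetical. The container still owns authentication and the coarse role fence, and the application only consumes the identity the standard framework hands it:

    import java.security.Principal;
    import javax.servlet.http.HttpServletRequest;

    // Hypothetical application-side store of fine-grained rights
    // (a database table, a cache, whatever fits the shop).
    interface PermissionStore {
        boolean mayRead(String userName, String documentId);
    }

    class DocumentGate {
        private final PermissionStore permissions;

        DocumentGate(PermissionStore permissions) {
            this.permissions = permissions;
        }

        boolean mayOpen(HttpServletRequest request, String documentId) {
            // Coarse fence: the same role check the container enforces declaratively.
            if (!request.isUserInRole("analyst")) {
                return false;
            }
            // Fine grain: key the application-specific lookup off the
            // identity supplied by the standard framework.
            Principal user = request.getUserPrincipal();
            return user != null && permissions.mayRead(user.getName(), documentId);
        }
    }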

Why Software Projects Should Not Depend on an IDE

disclaimer: I’m about to show my age.

My first “IDE” was an IBM 029 keypunch machine. I keyed programs in FORTRAN, COBOL, Assembler and PL/1. When I messed up, I threw away the defective cards and punched new ones. After waiting in line for a free keypunch machine.

Over the years, things got better. I graduated to online terminals, then added debuggers, code assists and refactoring.

But I also learned something: IDEs are more fluid than programming languages. As projects got more complex, IDEs took on more and more of the responsibility for maintaining not only the program code but the entire build process, including tracking the locations of the build components and running the associated build utilities, such as resource compilers.

Eventually it reached a point where the IDE was less an aid than an addiction. A certain IDE which Shall Remain Nameless degraded the ability to build from the command line using constructs like batch scripts and makefiles into virtual uselessness. It was too much trouble to learn how to create a build from the command prompt.

Then, the New, Improved version of the IDE came along. But guess what? The old projects had to be modified to build under the new IDE. Then came the late-night call for an emergency code fix. The project in question hadn’t been touched in 2 years, and it wouldn’t compile without the old IDE, even though the actual fix was a 1-line code change. Installing the old IDE required installing old support programs. Taken to extremes, it would have meant dragging an old junk computer out of the closet (at 2 a.m.), installing an old OS, then installing the old IDE and its supporting cast, all to resolve a minor problem.

But wait – there’s more!

Years passed, and I moved on to other platforms. But now I worked in a shop with 2 different groups of developers. The other group had developed an entire ecology of their own, all tied to their IDEs and desktop configurations. We were supposed to be sharing standards, but their ecology had been designed specifically to suit their needs, not ours. They could hand us code, but it wouldn’t build, because we hadn’t invested our desktops in the mass of configuration that was scattered around inside people’s IDEs.

Contrast this with a batch-based process such as Maven or Ant.

Maven, of course, tends to force a consistent organization, for better or worse. So while the choice of goals may be unclear, at least the build is consistent.

Ant is less standardized, but it’s no big deal to make an Ant script self-descriptive.

And in both cases, you can place project/platform-specific configuration in files that can be passed to build processes and stored in the project source code archives, instead of being embedded on people’s desktops.
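
A sketch of what I mean, with invented target and property names: give every public target a description that ant -projecthelp will print, and keep machine-specific settings in a properties file that travels with the source rather than living in anyone’s IDE:

    <!-- build.xml -->
    <project name="myapp" default="dist">
        <!-- Platform-specific paths come from a file in source control
             (optionally overridden locally), not from IDE settings. -->
        <property file="build.properties"/>

        <target name="compile" description="Compile the Java sources">
            <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
        </target>

        <target name="dist" depends="compile"
                description="Package the application into dist/myapp.jar">
            <jar destfile="dist/myapp.jar" basedir="build/classes"/>
        </target>
    </project>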

When an IDE Just Won’t Serve

There are cases where an IDE simply cannot do the trick. It’s not uncommon these days, especially in Agile development shops, to publish a Nightly Build. The Nightly Build is typically a collation of the daily commits, done after hours in batch. “In Batch” and “IDE” don’t go together: an IDE is an interactive environment. Furthermore, the batch build machine may be a server, not a desktop machine, and not all servers have GUIs installed, or even windowing systems. Ant and Maven won’t have a problem with that, but IDEs will.