Sunday, January 22, 2017

Giving up again on Windows 7, and on Windows as my main operating system for work machines

I recently switched back to Windows 7, for work purposes, for a couple of weeks. Here's what I've learned, and why I'm switching back to Windows 10, but only inside a VM:

1. Suspend/resume and sleep/wake in Windows 7 are horrible, even on decent hardware. Windows 10 is vastly superior: faster and more stable.

2. Windows Update on Windows 7 is horrible and broken, and the ability to supposedly "pause" updates is actually only a partial pause. There are still times when TrustedInstaller.exe is going to chew up all your I/O bandwidth and pwn your machine.

There really is no stable way to run modern 32-bit or 64-bit Windows applications on your computer and maintain control of it, unless you fight Windows by boxing it in a VM.

I'm switching back to "Windows 10 in a VM", on a very fast solid-state drive with a huge amount of RAM. I can pause Windows 10 updates with one simple trick: shut Windows 10 off, and only use it for what I actually need it for, which is when I'm getting paid to write and repair Delphi code.

If anyone has read any formal write-up on what sorts of invisible installs Microsoft slides out to users, even in Windows 7, and what they do, and why, I'd like to know about it.

Other surprising things in Windows include:

* The ReadyBoost feature, which, like most attempts at smart caching, shows that caching and cache invalidation are genuinely hard, nearly impossible problems. You can speed some users up, deploy your solution, and actually end up decreasing net system performance for THEIR workloads. ReadyBoost in Vista was a complete disaster. It has since been scaled back and restricted in its application, so it rarely hurts you on Windows 7, and on Windows 10 it has been corralled and controlled enough that it rarely has a downside.

* The Windows Search indexing feature, and the Windows Defender antivirus active mode, can radically cripple your system's performance by adding I/O latency, stealing I/O bandwidth and CPU resources, and, of course, consuming RAM.

When I have performance problems on Windows, I am used to switching to the I/O monitoring features of the Windows Resource Monitor. Most often, when my computer is slow, it is I/O bound; it is seldom RAM/swap bound, and most often it's programs doing things I literally do not care about that are taking away the resources that should be doing work I care about.

I do not see a solution here, since Microsoft does not permit you low-level control of your own system. At one time, in the Windows NT/2000/XP era, Microsoft permitted you to turn off ANY service in Windows, and the list of services that ran in the background was quite manageable. The list of processes on a freshly booted Windows system is now so large, and so much stuff runs, that you had better just accept that about 8 GB of RAM, about 4 MB/s of your total I/O bandwidth, and at least one full CPU core are off the table; that's reserved for Microsoft, and for whatever it decides to use your computer for. If it sometimes finishes and you can actually use your WHOLE computer for a few minutes, you should really be thanking Microsoft, and not be so annoyed that, on the whole, Microsoft thinks your computer should be doing something other than compiling your code or serving up a database to your users; that 75% of your computer is yours, most of the time, and that 25% of it is Microsoft's, most of the time.

Windows 10 then introduces this "active hours" configuration, which just might be the critical factor in deciding whether to use Windows 10 or Windows 7 for serious professional development work. Microsoft has at least decided that, for now, during the hours of 8 AM to 7 PM, I am permitted a modicum of control over what happens on my computer.  Thanks, Satya.






Saturday, January 7, 2017

Delphi 10.1 Berlin Pro Tip: Renaming the CommunityToolbar BPL, and Good Riddance to It


In Delphi 10.1, without all the latest updates, if you get Delphi startup crashes, you can sometimes get around them by removing the optional parts of Delphi that are taking the IDE down. One such piece, which I usually rename so that it can't get re-enabled again, is the Community Toolbar IDE plugin BPL.

I go into the Program Files (x86) folder, then Embarcadero\Studio\18.0\bin, and rename CommunityToolbar240.bpl by putting a tilde (~) on the front of the name.

Not only does Delphi 10.1 Berlin RTM then actually start up without access violations and crashes, but an unsightly and useless toolbar is gone from the IDE. I consider that a win-win.

I believe the above issue must have been fixed in Update 1 or Update 2. But given what a pain it is to install so-called updates that are really not updates but full reinstalls, you can't really blame me, can you? I work in a variety of VMs, so it's not a one-time reinstall for me, but a series of VMs with a series of Delphi versions in each. I have had crashes in the Community Toolbar ever since whatever version first introduced it, and it remains one of the ugliest and most pointless IDE plugins ever.

There are other IDE plugin bits that are optional in Delphi, and if you're having a startup crash, one of the other things I sometimes try disabling is the debugger plugins for C++Builder (if you have the full "RAD Studio" SKU, you have both), and a few other pieces that have historically been troublesome.

If you're running out of memory, or you find the IDE is slow, it's also helpful to remove bits that might be causing the problem and see if it goes away, even if only as a technique to find and isolate the culprit; kindly report your findings in the Quality Portal.

If renaming the DLL/BPL doesn't make things better, you can always restore your environment to its original state by putting the file back to its original name.

I also find that if my registry settings are to blame, renaming the current-user Embarcadero BDS registry hive to "BDS_old", starting with fresh settings, and seeing whether the crash or problem goes away is a helpful step in bisecting and understanding bad Delphi IDE states. Clearly the rename is going to nuke all your local configuration settings and get you back to install-time defaults, but if your environment is self-contained and easily set up again by just running some package-install and library-path setup tool, this technique can be doubly useful.

Saturday, December 31, 2016

Windows 7 Stinks


Since Microsoft won't let me decide when Windows 10 updates are applied, I've decided to experiment with working from Windows 7 for a while. So far, here are the things that stink in Windows 7. Windows 7 is terrible, but it's better than Windows 10.


 1. Fresh Windows 7/SP1 installs won't update. 

 In order to get Windows Update running on a freshly imaged Windows 7 machine, you need to follow some Microsoft workarounds involving sideloading two or more KB updates. Because of the very bugs you're working around, these updates freeze and will not install until a large delay (up to an hour) after you start installing them. To work around this insanity, it can help to stop WUAUSERV (net stop wuauserv) from an elevated command prompt, and to disable WiFi and network connections, so that Windows Update can't start a scan for updates, which puts Windows into a stupid mode where it won't install anything. Hat tip to Glen Dufke for this procedure: download the updates first, then disable WiFi, install KB3020369 and reboot, then install KB3172605, and read the notes in support article KB3200747.

 2. Powershell 2.0

 I am a daily PowerShell user (it's my primary shell environment), and PowerShell 2.0 is unspeakably lame. I use posh-git; to use posh-git and most other modern stuff, you should install Windows Management Framework 4.0, which updates PowerShell to a more respectable 4.0.

 3. The old and crappy Console Host (text-mode applications in Windows 7)

 In Windows 10, since the Anniversary Update, the console host is as nice to use as any Linux console. Most notably, copy and paste works properly between Windows apps and the shell. In the classic Windows 7 console, you have to use the horribly clumsy Alt-F10+cursor-keys hack to paste into the shell with a keyboard shortcut. Or you can use your mouse. MOUSE. Use the MOUSE to paste, because Microsoft didn't set up a keyboard shortcut for pasting. It's hilariously awful to go back to the command prompt, or PowerShell, in Windows 7 when you're used to a sane and useful thing like the Windows 10 console.

Concluding Rant

 So will I stay on this configuration? I believe that in spite of the things I lose when I move back to Windows 7, after almost two years on Windows 10 (and Windows 8.1), the one thing I get back is going to be worth it. I need a work machine that doesn't decide that now would be a wonderful time to install updates. It feels to me like Windows 10 does not belong to me; it belongs to Microsoft. It doesn't even notify me when it decides it wants 100% of my hard disk bandwidth. Windows 7 is not perfect in this respect: due to bugs and other weird Windows 7 features, sometimes Microsoft's own core services will go insane on you, performing a local denial-of-service attack on Microsoft's own users.

 How can Microsoft remain as user-hostile as it has clearly been in the Windows 10 era, and retain its customer base? If you want a Windows 10 machine to behave according to business-friendly and work-friendly rules, the best you can do is buy the Enterprise features and set up group policies to disable Windows updates. A recent update to Windows 10 lets you set "active hours" during which updates are not supposed to happen, but that has not worked for me. Frequently I will still get to work in the morning and be greeted by a "Windows will now reboot and finish updates" message. This has eaten hours of my time, and each time it happens, I get even more upset.

 Windows 10 is free, and worth the price. Until Microsoft grants people the right to own and fully control their own computers, I think that using Windows 10 for professional work purposes is insane. I used to worry about IT departments locking PCs down on me so that they became useless. Now Microsoft has cut the IT departments of the world out of the loop. If you don't have Windows 10 Enterprise, Microsoft is your IT department, and they've decided you're not to be trusted with something as important as the management of your own PC.

Wednesday, September 21, 2016

Delphi Features I Avoid Using And Believe Need to be Gradually Eliminated from Codebases

My guest post from L.V. didn't seem to have enough Delphi specifics for one commenter, so I thought about it and realized that what L.V. is talking about is practices (stuff people do), not features.

But there are features in Delphi that I think are over-used, used inappropriately, used indiscriminately, or that should almost never be used, since better alternatives almost always exist. Time for that list. No humorous guest-posting persona for this post, sorry; just my straight opinions.

1. WITH statement

This one is hardly a surprising entry, as it's one of the most controversial features in the Delphi language. I believe it is almost always better to use a local variable with a short name, creating an unambiguous and readable statement, instead of using a WITH. A double WITH is much more confusing than a single WITH, but all WITH statements in application-layer code should be eliminated over time, as you perform other bug-fix and feature work on a codebase.
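Here is a minimal sketch of the local-variable style I mean (TCustomerForm, TAddress, and the control names are hypothetical, just for illustration):

   procedure TCustomerForm.SaveAddress;
   var
     Addr: TAddress;  // a short-named local replaces the WITH scope
   begin
     // Instead of: with Customer.Address do begin Line1 := ... end;
     // every statement below is unambiguous about what it assigns to.
     Addr := Customer.Address;
     Addr.Line1 := EditLine1.Text;
     Addr.City := EditCity.Text;
   end;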

2. DFM Inheritance

I don't mind having TApplicationForm inherit non-visually from a TApplicationBaseForm that doesn't have a DFM associated with it, but I find that maintenance and ongoing development of forms that use DFM inheritance is problematic. There can be crazy issues, and it's very difficult to make changes to an upstream form and understand all the potential problems downstream; this is especially true as a set of form inheritances grows larger. I have even forced non-visual inheritance using an interposing class, and found that IDE stability, and the ease of working with the codebase, improved.
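For the curious, the interposer trick looks something like this (the unit and method body here are invented for illustration). Any form unit that lists this unit after Vcl.Forms in its uses clause picks up the shared behavior, with no DFM inheritance involved:

   unit AppForms;

   interface

   uses
     Vcl.Forms;

   type
     // Interposer: same class name as the VCL's TForm, so existing
     // "TMyForm = class(TForm)" declarations resolve to this class
     // when AppForms appears after Vcl.Forms in the uses clause.
     TForm = class(Vcl.Forms.TForm)
     protected
       procedure DoCreate; override;
     end;

   implementation

   procedure TForm.DoCreate;
   begin
     inherited;
     // Shared non-visual setup goes here: fonts, scaling, logging, etc.
   end;

   end.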

3. Frames

The problems with frames overlap with those of DFM inheritance, but frames have the additional troubling property of being hard to make fit visually and look good. You can't really know whether a change to a control's original position in the base frame will be overridden in a given instance or not; you just don't know. Trying to move anything around in a frame is an exercise in frustration. I prefer to compose parts of complex UIs at runtime instead of at design time.
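A minimal sketch of what run-time composition looks like (TEditorFrame and ClientPanel are hypothetical names):

   procedure TMainForm.FormCreate(Sender: TObject);
   var
     Editor: TEditorFrame;
   begin
     Editor := TEditorFrame.Create(Self); // owned by the form, freed with it
     Editor.Parent := ClientPanel;        // parented into a plain panel
     Editor.Align := alClient;            // layout decided in code, not in a DFM
   end;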

4. Visual Binding

I have had nothing but trouble with Visual Binding. It seems that putting complex webs of connections into a design-time environment is not a net win for readability, clarity, or maintainability. I would rather read completely explicit code and not deal with bindings. Probably there are some small uses for visual binding, but I have not found them; my philosophy is to avoid it. It's a cool feature when it works, but the end result is about as much fun as a mega-form.

5. Untyped Parameters in User Procedures or Functions

The old way of handling "void *" types (if you know C) in Pascal is the untyped var syntax; modern Pascal should use PByte instead. Where possible, I prefer PByte, which I consider the much more modern way of working. I believe the two are more or less equivalent in capability, and that Delphi retains untyped var parameters for historical compatibility reasons, but unless I'm writing a TStream descendant and must override a method that already has this signature, I prefer not to introduce any more anachronisms like that into my code.
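Here is a sketch of the contrast, with hypothetical fill routines (note that pointer math is enabled for PByte by default in modern Delphi, so indexing works):

   // Old style: the untyped var parameter erases all type information.
   procedure ZeroFillOld(var Buffer; Count: Integer);
   begin
     FillChar(Buffer, Count, 0);
   end;

   // Newer style: PByte keeps the type system engaged, and the call
   // site must be explicit about what it is passing.
   procedure ZeroFillNew(Buffer: PByte; Count: Integer);
   var
     I: Integer;
   begin
     for I := 0 to Count - 1 do
       Buffer[I] := 0;
   end;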

6. Classic Pascal File IO Procedures

Streams should have replaced the use of AssignFile, Reset, Rewrite, and CloseFile a long time ago; new code has no business using them.
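For example, a classic ReadLn loop translates naturally to TStreamReader (the file name and the WriteLn destination are just illustrative):

   uses
     System.Classes, System.SysUtils;

   procedure DumpLines(const FileName: string);
   var
     Reader: TStreamReader;
   begin
     // Replaces AssignFile/Reset/ReadLn/CloseFile, and gets you
     // explicit encoding handling for free.
     Reader := TStreamReader.Create(FileName, TEncoding.UTF8);
     try
       while not Reader.EndOfStream do
         WriteLn(Reader.ReadLine);
     finally
       Reader.Free;
     end;
   end;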

7. Unnecessary Use of Pointer Types and Operations in Application Layer Code

In low-level component code, with unit tests, pointer types and operations will sometimes be justified, for example, to implement your own linked list of value types that are not already implicitly by-reference. But in the application-layer (form, data module) code where most Delphi shops spend 90% of their time, introducing raw pointer operations is almost always going to make me require a change if I'm doing the code review. Delphi is a compiled, "somewhat strongly typed" language, and I'm happiest with application-layer code that does not peel away the safety the type system gives me.

8. Use of ShortString Types with Length Delimiters, in or out of Records

Perhaps in the 1980s, a Pascal file of a record type, with packed records, made sense. These days, it's a defect in your code. The problem is that once such a pattern is in your code, it's very difficult to remove. So while an existing legacy application may contain a lot of code like that, I believe a "no more" rule has to be established, and, module by module, the unsafe and unportable stuff has to be retired, replaced, or updated. The amount of pain this kind of thing causes in real codebases that I have seen is hard to overstate.
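For anyone lucky enough never to have seen the pattern, it looks something like this (the record and field names are invented):

   type
     TCustomerRec = packed record
       Name: string[40];      // ShortString: 1 length byte + 40 ANSI bytes
       Balance: Double;
     end;
   var
     F: file of TCustomerRec; // the binary layout IS the file format:
                              // not Unicode-capable, not portable, and
                              // frozen forever once data files exist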

9. Use of Assignable (Non Constant) Consts

The compiler directive {$J+} in Delphi allows typed constants to be overwritten. It should never be used.
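Under {$J+}, this compiles, and silently mutates what looks like a constant (a hypothetical example):

   {$J+}
   const
     RetryCount: Integer = 3;  // a "typed constant" is really a variable

   procedure Tweak;
   begin
     RetryCount := 99;  // legal under {$J+}: hidden global mutable state
   end;
   {$J-}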




Tuesday, September 13, 2016

Delphi Worst Practices, The Path to the Dark Side

Guest Post from L.V.




If you want to do the worst possible job at being a Delphi developer, and go from merely weak, to positively devastating, and you want to give your employer the greatest chance of failing completely, making your users hate your product, and going out of business, while exacting the maximum amount of pain and suffering on all around you, if you wish everyone to fear your approaching footsteps, and to be powerless to cross you, here are some startlingly effective worst practices to consider.

Many require very little effort from you, other than occasionally putting your foot down and insisting that certain things are sacred and can't be changed, or that everything is bad and must be changed immediately, no matter what the cost.   It is important that the team never sense that they have the collective ability to go around you, and reinstate optimizations that undo your careful work to make things worse.  A strict hierarchical authoritarian power structure is key to maintaining steady progress towards pessimization.

No matter how bad things are, you can always find a way to make things a little worse.   I can't claim to have invented any of these, and I believe all of these are extremely popular techniques in Delphi shops around the world, and so it seems there is great interest in doing as bad a job as possible.  If I can contribute something to the art, it will be in synthesizing all the techniques of all the pessimization masters who have come before.

Now that you have considered whether you want to go there or not, I will share my secrets.
Here is the path that leads to the dark side...

1. Ignore Lots of Exception Types in the Delphi IDE

The more exceptions you ignore, the less aware of your actual runtime behavior you will be. Encourage other developers to ignore exceptions. Suppress the desire to know what is going on, and become as detached as possible from reality. The optimum practice is to ignore only EAbort and exceptions similar to it, like the Indy disconnect exception; so the pessimum practice is to disable break-on-exception forever, or to add a very large number of classes to the Delphi exception-ignore list. Also make very sure that you ignore access violations.

2. Raise Lots of Exceptions, Even for Things That Didn't Need One

This one is great, because you will annoy all developers and train them to ignore certain exception types. Old code that uses StrToInt where it could have used StrToIntDef will eventually make developers ignore all manner of exceptions.
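The fix is trivially easy, which makes the worst practice all the sweeter to preserve (the control name and default port here are hypothetical):

   // Pessimal: raises EConvertError on every stray keystroke.
   Port := StrToInt(PortEdit.Text);

   // Boring alternatives that rob everyone of exception fatigue:
   Port := StrToIntDef(PortEdit.Text, 8080);
   if not TryStrToInt(PortEdit.Text, Port) then
     ShowMessage('Please enter a numeric port.');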

3. Try...Ignore

This worst-practice (or anti-pattern) can cause you more grief than any other worst practice:

   try
      MaybeDoAllOrPartOfSomeThing;
   except
   end;

To be maximally evil, don't even write a comment. Make every reader guess why you felt that not even logging the exception, and not even restricting your handler to a specific sane type of thing to catch and ignore (like EAbort), was acceptable. Make them wonder what kind of evil things lurk below there, and how much memory corruption is being silently hidden. Dare them to remove this kludge of doom that you have imposed.
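For contrast, the non-evil version of the same block looks something like this (Log is a stand-in for whatever logging facility you have):

   try
      MaybeDoAllOrPartOfSomeThing;
   except
      on E: EAbort do
        ; // deliberate, documented silent abort: the one sane thing to swallow
      on E: Exception do
      begin
        Log('MaybeDoAllOrPartOfSomeThing failed: ' + E.ClassName + ': ' + E.Message);
        raise; // let it propagate; hiding it is how corruption goes unnoticed
      end;
   end;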

4. Make your debug builds unable to ever run with Range Checking or Overflow Checking on, even if a developer wants to use them for a while.


While it can be a best practice to ship your release builds with range checking and overflow checking off, because of the effect on your customer of some relatively benign thing, which you can't predict or prevent, blowing up on them in release, it can be a remarkably effective worst practice to build a giant codebase where you never bother to explicitly turn OFF range checking, overflow checking, and I/O checking at the places where they are KNOWN to generate false positives. In codebases where I can turn on range checking and overflow checking in my developer-machine debug builds, I often find my effectiveness in finding bugs is multiplied. Those who want to pessimize their entire team's work will want to put such powerful tools, which could be used for good, out of reach.

Note that turning on range checking and overflow checking in release builds could itself be a form of pessimization, because it's hard to guarantee they won't have unknown effects. Most of all, changing these defaults to anything other than what you've always had is injecting a massive amount of chaos, and good developers will often state that this should be avoided in release builds. You might be able to inject this kind of random evil chaos without anyone noticing if, for example, you can arrange for builds to be done on your machine instead of on a build server.
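For the few remaining non-evil readers: the good-guy version costs a couple of compiler directives. Something like this, assuming your Debug build configuration defines DEBUG (recent Delphi versions do by default); the hash function is just an illustration of a known, intentional overflow:

   {$IFDEF DEBUG}
     {$RANGECHECKS ON}
     {$OVERFLOWCHECKS ON}
   {$ENDIF}

   // Save, disable, and restore the overflow-check state around the one
   // spot where wrap-around is intended, so the global setting can stay on.
   {$IFOPT Q+}{$DEFINE OVERFLOWCHECKS_WERE_ON}{$ENDIF}
   {$OVERFLOWCHECKS OFF}
   function SimpleHash(const S: string): Cardinal;
   var
     I: Integer;
   begin
     Result := 5381;
     for I := 1 to Length(S) do
       Result := Result * 33 + Ord(S[I]); // overflow is deliberate here
   end;
   {$IFDEF OVERFLOWCHECKS_WERE_ON}{$OVERFLOWCHECKS ON}{$ENDIF}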

5.  Permit Privileged Behavior By Developers with God-Like Egos

Unlike self-organized agile teams, where the rules apply the same to everybody, make at least one person on your team a God-Like Developer, who can do things other developers are not allowed to do. Ugly pointer hackery and evil kludges are okay if you're this guy, and totally unacceptable from anybody else. To really, fully pessimize your team and your codebase, let this guy randomly refactor anything he wants without asking anybody else's permission. These God-Like Developers can review other people's code, but don't need their own code reviewed, because they never make mistakes.


6. Don't Document Anything

This is one of the easiest ways to pessimize; it requires basically no effort from you, and all things having to do with software teams and processes will generally tend to rot on their own. It is consequently one of the most popular forms of pessimization. Sometimes you will need to quote the Agile Manifesto, or people will accuse you of having evil motives. Quoting the Agile Manifesto will get these people to shut up.

7. Argue About Indentation

By now things are bad, and significant developer attention will be focused on improving them, undoing your careful work of pessimization. Instead of letting the team focus on fixing core engineering mistakes and technical debt, redirect the team to consider more carefully the effects of one indentation style over another, various formatting issues, and comment block styles.

8. Magical Unicorn Build Process, and the Voldemort Build Process

I call these special non-reproducible builds "Magical Unicorn Builds" because it is entirely possible that the one PC where the builds occur is the only place in the universe where the code, as it lives now in version control, will actually build. The secrets and accidents of the entire project's history live as non-annotated, non-recorded bits of state on that PC: contents of the registry; contents of various folders holding component source code that is not kept in source control and will naturally differ slightly from machine to machine. There will be no way to assure that a known and controlled set of inputs created a traceable end product. Lists of the tools required for the product to build will not exist; we don't need no stinking documentation. For bonus pessimization points, the build should not be done via a build.cmd batch script or a CI tool like FinalBuilder, but should instead require a bunch of arcane and undocumented actions performed manually by the High Priest of the Dark Art of Building the Product. In such a shop, we may in fact get all the way to the Voldemort Build. The Voldemort Build is a secret known only to one developer, whom we will call Voldemort. Voldemort knows arcane and terrible things that would make you weep, which must never be written down, or shared at all. Only Voldemort knows the ultimate price of his own power, and he is willing to take any action to protect his own interests.

If you do all of these things, you may be very near being as bad as it is possible to be, and may become a Dark Lord some day.  It will take some hard work, but I'm sure you can do it. Go get 'em, tiger.

Please share your own worst practices in the comment box.  Together, we can rule the Galaxy.




Tuesday, August 30, 2016

Nexus Quality Suite: Why Profiling and Checking Your Application for Leaks is Essential (Part 1 of a review of Nexus Quality Suite 1.60)

I've been using and experimenting with Nexus Quality Suite on and off for the past 9 months, and I've been meaning to write up a blog post about it. The trouble with reviewing this software suite is that it contains so much stuff that I can only skim the surface, so I'll present it in small, meaningful, task-oriented mini-reviews. Initially I ran the tools in this suite against an extremely large Delphi system, and while the suite is definitely useful for very large systems, I found it difficult to demonstrate that usefulness using such a large application.

So I've decided to keep my real-world focus in reviewing this tool, but I'm picking a bit of my own personal code to profile and test. I'm going to run Nexus Quality Suite's tools against a little application I first wrote in about 1996, which is in my toolkit of "system admin and developer-operations" tools. Here's what it looks like:


It can ping any number of hosts, from one to hundreds. When any of those hosts goes offline (does not respond to ICMP ping), or the DNS resolver stops resolving, this little tool can beep (for in-office monitoring) or send an email (which can alert me even when I'm out of the office). But this tool has always been slow, slow, slow. Since I add additional (configurable) sleep time between its runs, I've never worried about its performance, but I recently had a use for this tool again, so I dusted off the source code, added a few little things, and recompiled it in Delphi 10.1 Berlin. I even found a missed "Unicode port" bug, where I had forced a cast to AnsiString over a UnicodeString in a way that actually sent Unicode bytes into an ANSI Windows API. Bad Warren! No cookie for you! My only excuse is that I wrote the code in question in 1996, in Delphi 2, and simply overlooked it when porting this code to Unicode Delphi. Now back to my review...

Anyway, back to the performance profiling tools. The latest version of Nexus Quality Suite, 1.60, supports both 32-bit and 64-bit programs; I would recommend profiling your 32-bit builds where you can, as they are probably easier to profile, but for those cases where you really need to profile 64-bit binaries, now you can. The NQS installer adds a group of items to your Tools menu. Be aware that certain Delphi versions have a bug, for which a workaround is available, and the Nexus Quality Suite installer actually warns you about it. That is good customer service right there. Good job, Nexus, and thanks, Andreas Hausladen.

Here's the installer warning. I have XE4, XE8, and 10.1 Berlin on my computer right now, and this is what I saw:


After installation, here are the menu items. There are too many tools to cover them all in one review, so I'm going to quickly show one application run through two of the tools.


The first tool in this review is brand new, I think. The Block Timer is a new profiler based on the other profiler tools, but with some new capabilities. I asked support and was told that more documentation is coming soon. The Block Timer joins its partner, the classic Method Timer, in providing some pretty great time-based profiling capabilities for your Delphi applications. Here is a summary of the new Block Timer's features compared to the existing Method Timer and Line Timer profilers:


1. The Block Timer is thread-aware and can break information down into thread-by-thread values, whereas the other profilers combine all times across all threads.

2. The Block Timer can accurately report time spent in recursive methods.

3. All that extra bookkeeping makes the overhead of running the profiler a bit higher.

4. There is no dynamic profiling in this one: you lose the trigger feature from the Line Timer profiler, which is an important feature. It's worth switching to the Line Timer when you need triggers.

So far it seems to me that in smaller applications, with fewer procedures selected for profiling, the most intensive technique (the Block Timer) produces the most interesting results. The larger the application, and the larger the cross-section of its methods I want to examine, the more useful the classic lower-overhead Method Timer and Line Timer profilers become.

Configuring your application to work with this or any other profiler is pretty consistent; the same steps are necessary for this tool as for any sampling profiler or other runtime analyzer. Turn on TD32 debug symbols in Project Options: on the Linker tab in older versions, or under Debug Information in newer ones, according to the docs.

Run the tool from the Tools menu. Note that on Delphi XE through XE6 it's a good idea to do a full rebuild before you click the Tools menu item, as Delphi doesn't rebuild the target for you on those versions.

You click one tool, and the first time you do, you will probably want to do a bit of configuration; each tool requires slightly different settings. In my opinion it is NOT a good idea to profile ALL of any non-trivial application: first, because you're asking a lot of the NQS tool, and second, because even if the tool can successfully gather information on ten or twenty thousand methods, you probably can't do much with the results. I recommend doing a little searching and probing to find some routines that matter, and including those. The user interface is reminiscent of Outlook 2000 for most of the tools. In the case of the Block Timer and Method Timer, you use the Routines icon, which for the last few releases has included a nice Search feature, which I think I requested, and which I'm gratified to see in there. Because my app is all about the ping, I'm looking for the Ping methods; I want to know what they're up to...





After searching for and selecting the routines, I right-click and "Enable Tracking for Selected" methods, then click the green triangle "play" icon to make my application-under-test start executing. In a small application you could perhaps select everything, but as I have learned from much experimentation, it's really better to spend a bit of time searching for methods you suspect are relevant and enable a dozen or two of those, then drill in and enable further layers of the code as necessary to get a clear picture of your system's behavior.

After my program has executed long enough to get a reasonable sample (in my case, just over 5 minutes), I shut it down, and the timing analysis results are shown:


You can also see a bit of a trend of your program's total CPU usage, which can be really interesting, because you might want to know what the program is doing during those bursts of CPU activity.



A nice built-in feature: if you have configured your source search path in the NQS project options, you can just double-click on a line of interest and see the code:


If the NQS tools don't show things in the font you wish, you can change the fonts they use; there are individually selectable fonts. I change ALL of them to Consolas, because it's the one true code editor font. If you like the Raize font and have it around, you could pick that one. Courier New is more to some other people's taste. If you happen to want Comic Sans, well, you're drunk; go home.



So now I want to jump from Tool to Insight. The reason tools like this are great is the moment the insight clicks in your head. Today I saw this line and realized: ResolveAddress is a function, and because parentheses are not mandatory in Pascal method invocation, the code here looks like a simple variable or property check when it's actually a very expensive call. Do I really need to repeat the resolve on each ping, or could my tool just periodically check that DNS resolution is still working properly, cache the resolved value, and send multiple ICMP pings to the IP address? As it stands, I think I'm wasting a lot of cycles, loading down my company or customer site's DNS service unnecessarily, and generating a bit of wasteful network traffic. In my next version, beyond making my tool say 10% more CPU-efficient and 10% more network-efficient, I might also make it a bit more configurable: say, letting the user set how often to check that DNS resolution for my important host is working.


I also think I should rewrite the code above so that it's clear that it is not just a value check, but that a function is being invoked. I really think I need to rewrite a lot of the internals of TICMP.
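Something as small as a pair of parentheses does the job (I'm simplifying here, and assuming ResolveAddress returns a Boolean, which may not match my actual TICMP code):

   // Reads like a field or property test, but performs a DNS lookup:
   if not ResolveAddress then
     Exit;

   // Identical meaning, but the parentheses flag it as a call:
   if not ResolveAddress() then
     Exit;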

But what else could be wrong with my code, other than being wasteful? How about memory leaks? So I am now going to switch to Code Watch. After only a few minutes of trying it out, I found that although my background worker thread terminates, it is never freed: I have a memory leak. The tool finds the problem and reports the source line. It also found some API failures that I may or may not have been aware of, and a Win32 resource (a thread handle) that was leaked. This is awesome.



I'm going to wrap up now. I hope that all the above impresses you, because it sure impresses me.

Before I wrap up, I'll briefly compare this option to your only other real option for this kind of tool. SmartBear's AQTime suite can do many of the same things that Nexus Quality Suite can do, but Nexus Quality Suite can actually do lots of things that AQTime can't. AQTime is more expensive, at $599, with a very restrictive named-single-user license and a nasty activation and intrusive anti-piracy copy-protection system that I very much dislike, because it won't let me run with a single-user license inside a VM. The copy protection actually runs a background Windows service, which detects all kinds of things, including virtual machine use, and it disallows program operation inside a VM. And the IDE integration of AQTime crashed on me the last couple of times I used it. I reported those crashes, and over several releases, they never got fixed. Sayonara, AQTime.

So what's the price for NQS? At the promotional sale price of $226 USD ($300 AUD), and with no intrusive copy protection that treats me as a thief, I have no problem recommending that EVERY Delphi developer and Delphi-using company buy this suite. There are lots of tools, and they work really well. If I had to complain about something, it's that the documentation needs further work, but they are working on that. The product works, and when I find a problem or have a question, the technical support team is great. The price is going up soon, so I recommend grabbing it while it's on sale.

I am planning further review articles to cover this suite; in particular, I believe the automated GUI testing features in NQS deserve their own separate review, and I think there are many more profiling techniques available for teasing out very complex runtime problems in your system: not JUST to gather the data that helps make your program faster or leak-free, but also to understand complex behaviors by gathering runtime data that lets you see your program running.

In the past year, the amount of new stuff delivered in NQS is truly astounding. 64-bit support is new, and I think this whole extra set of profiling tools is new. I tested NQS on an extremely large application where I work; the product is over 5 million lines of Delphi code, including all the in-house and third-party component libraries, all the main forms and data modules, and other code. In an earlier version of the tool, I found a crash inside one of the NQS tools; I sent Nexus the information to reproduce it, and in the next release it was fixed. That's good customer service.


NQS is a tool that deserves a spot in your toolbelt too.

Full Disclosure: I received a complimentary review copy of this product, but the opinion above is 100% my own, and I don't write good reviews for every product I receive a license for. In fact, quite the opposite: if I see something I dislike or can't use, I'll say so. I'm a working coder, and I have no time for weak tools. I have recommended that my boss buy multiple copies of this tool suite at work, where I believe it would be extremely useful.





Thursday, July 7, 2016

How to Hire the Right People? I have NO IDEA!

I have seen a lot of articles on the interwebs from frustrated job-seekers who say over and over that hiring is broken.

Where I work, I am interviewing candidates who have recently graduated from university for a Junior Software Developer position with a focus on Web/JavaScript/HTML5. Consequently, I have been thinking a lot about how we in the software industry interview and hire people, because I have been interviewing people, and I think I have moved past the need to haze candidates.

 I was not subjected to hazing rituals when I was hired for my current gig. I did not write any technical exam; the interview was verbal, though the company had a written exam it would use when it felt there was some question about a candidate's abilities. I did bring in some code running on a laptop, code that did some interesting stuff, which was as close to "proving" I can code as I could think of. Ideally, a personal project you have spent two or three weeks on should be enough to demonstrate ability. But there have to be alternatives, and I will get into those below. If we're going to get rid of subjectivity, we need to replace it with something objective.

Hiring, like most management decisions, is in the end always going to be fairly subjective, and it's an area of subjective business decision-making that I think is very widely done poorly. I consider myself very poor at it, but I believe I'm getting better. I hope to improve by being both broader in my search for evidence and more focused on objective, hard-to-fake data.

The short version of this blog post works out to this:

I am in favor of two-to-four-hour take-home coding exercises, and I am against two-week trial projects.


Peppering Candidates with Random Technical Questions Is Not Working

I agree with the critics of our modern whiteboard and non-whiteboard technical hazing rituals.  

By treating all candidates the same and asking the same barrage of questions, we hope to map a candidate's knowledge, and some will even claim that this approach is "rational" or "scientific" or "impartial". It's not, because people are not bots, and technology is not as complex as you think it is; it's far more complex than you think it is.

Here's the problem with technical knowledge: it's not linear but factorial in complexity. Like the Koch curve, the closer you look, the more detail is generated, and there is actually no end to the complexity. If you don't know what I mean by that, watch this awesome talk by K Lars Lohn and then come back. If that talk doesn't give you a reason to go to technical conferences, I don't know what will convince you. There, now I'm a thought leader.

Now back to interviews. If an interviewer is sufficiently intelligent, I think the interviewer should start by determining, from the resume and any phone screens, the areas where the candidate expresses some interest, experience, and ability, and then talk as openly, and with as much goodwill and personal charm, as possible. In recent weeks, I have watched people as their anxiety goes down, and I notice that what you can learn from someone who believes you are not a jerk is much more than what you can learn from someone who has their fence up. This is a poker game where we lose if we keep our poker faces; the best move is to fold and show your cards.

This is what I'm looking for. I saw some of what I'm looking for in your resume. I see you mention here that you have tried Scrum and Kanban; what did you find worked and didn't work on your teams when you did those things? Let's talk about how teams work. Let's talk about how compilers compile, how the JVM runs your code, and how a statically typed language helps teams ship. How a unit test can help you not break things, and why tests are doubly important in a language like JavaScript, where there is no compiler, and where consequently useful forms of static analysis may be impossible. Let's talk about the recent trend towards languages that can be verified to be correct in some aspect, like D or Rust. Let's talk about functional programming.

Of the junior programmers I'm interviewing, very few have ever played with Rust or D, or F#, or Scala. Very few can tell me about interrupt handling inside the Linux kernel, or about safe concurrency models for web-scale transaction processing, or about the differences between two transaction settings in MS SQL.

So fine.  Let's find SOMETHING you love.  Animation? Awesome.  Games? Awesome.     Now we will dig into your own interests, and find out what you've done that we can see evidence of.

Don't I just sound so avant-garde? Trust me, I'm not. I'm probably going to ask juniors and intermediates whether a stack is LIFO or FIFO. Then I ask them whether, when you walk into McDonald's and wait in line to order a Big Mac, that line of customers is a stack or a queue. This question might be a bit too easy in England, where a line-up is actually called a queue, but in Canada I find that people who crammed the LIFO/FIFO definitions can't reason about them, and thus some conceptual wiring is missing in their heads, wiring that I can't quite account for. My mental picture of a stack is something you might remember from restaurants, if you, like me, are of a certain age:


I ask about stacks and queues not because you need to know that every day when you work on my team, but because I have a distressing feeling that candidates can graduate by simply cramming and collaborating on coding projects, and can manage to retain very little of the knowledge platform their degree could have given them. Which data structure would help me reverse the order of items in a list easily: a stack, or a queue? The important thing about my question isn't whether you could google it; it's how adept you are at thinking about systems built of large amounts of software and hardware.
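(For the record: a stack.) A minimal Delphi sketch of the idea, using the generic TStack from System.Generics.Collections: push everything in arrival order, pop it back out, and the LIFO discipline reverses the list for you.

   uses
     System.Generics.Collections;

   function Reversed(const Items: TArray<Integer>): TArray<Integer>;
   var
     Stack: TStack<Integer>;
     I: Integer;
   begin
     Stack := TStack<Integer>.Create;
     try
       for I := 0 to High(Items) do
         Stack.Push(Items[I]);      // arrival order in...
       SetLength(Result, Length(Items));
       for I := 0 to High(Items) do
         Result[I] := Stack.Pop;    // ...reversed order out
     finally
       Stack.Free;
     end;
   end;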

I believe that a working model of a smaller domain contributes to, and correlates well with, the reasoning skill you possess in the larger domain. The human brain, confronted with systems composed of parts it does not understand, tends to ascribe to others the agency for fixing and changing those systems. When an engineer understands the fundamentals of how a system works, she will, I hope, be able to begin picking complex problems apart, a process I call bisecting, until she finds individual smaller problems that can be solved. It is these bisectors of complexity that I search for when I interview. I am also looking for the developer who doesn't yet know how to do this, but who believes she can, and who will keep trying until she does. Possessed of reasoning skills and a strong set of engineering fundamentals, she is apt to succeed.

Even candidates who absorbed everything their school offered them will still need a lot of additional skills and will need to learn a lot of tools. But if you were not a learner, a sponge for knowledge in university, an organizer of systems and ideas, a bisector of problems, what rational evidence do I have that things will be different in your work life? If you can't tell me how to troubleshoot your mom's internet connection, I'm not going to believe you can understand a healthcare information systems environment.

I recently interviewed a candidate with a Master's degree in Computer Engineering who, I hope, was simply having trouble because English was a second language. Several days after the interview, I am wondering whether I simply made the candidate so anxious and flustered that I actually caused the interview's dismal result. Whether or not that happened in this case, it's critical that we interviewers turn our dreadful critical gaze upon ourselves, find the sub-par elements of our practices, and fix them.

A good interviewer needs to set candidates at ease.  When I see candidates smiling and laughing, and joking in an interview, I am happy.  I know that I'm talking with the real person, and that we can figure out what will and will not work with this candidate within this team.

I am not going to stop asking semi-random factual questions, but I am going to give candidates fair notice. I happen to like the little thing on Reddit where people ask you to "ELI5": explain it like I'm five. When you know something cold, you can explain it to a five-year-old. This is a new knowledge-sharing phenomenon that originates with millennials. If you're 21 right now, I'm old enough to be your dad, and then some. Unlike some people, I think the world is going to be fine when the millennials take over and we're all retired. I'm cool.

So why do I ask what DNS and DHCP are, when you could google that, and when those seem more like questions for an IT/network-admin role than a developer role? The argument that you can google what you don't know falls down at the point where you don't google, because you're facing unknown unknowns. Design-decision mistakes are a common after-effect of unknown unknowns. I make design-decision mistakes all the time; we all do. We do not understand the domain in which we are engineering well enough, and we do not even know what it is that we do not know. This is the unknown unknown I speak of. I am looking for engineers who are wary and meta-cognitive, who build themselves and others up. So let's get to my hire/no-hire criteria, and see if you agree or disagree with them.

Cardinal "Hire" Qualities (with profuse thanks to Joel Spolsky)

I want to hire someone who is SMART and CURIOUS, who GUARDS the team and GETS THINGS DONE, and WHO IS NOT A JERK. I have grouped and expanded things in a way that makes sense to me, but I freely admit that I stole almost all of this from Joel Spolsky. Thanks, man.

SMART + CURIOUS: I am looking for evidence that you are a passionate, intelligent geek who likes to write code. You have a deep and abiding interest in some (but usually not all) areas of computers, software development, and technology. If I ask you how a CPU's level-one and level-two caches work and you don't know, that's OK, as long as you can answer the question "tell me about something that you built recently on your own time that you didn't have to build", or "tell me about some language or operating system or tool that you're experimenting with".

GUARDS + GETS THINGS DONE: You're not just a member of a team that shipped, but a member of teams that would not have shipped without you. Your team didn't know about version control? You taught them. Your team didn't know about continuous integration? You added it to their practices. Your team didn't understand the zen of decoupling or the zen of test? You taught it. You modeled the practices that made your team get stuff done. When you saw things that were bullshit, things that would sap the team's motivation to GET THINGS DONE, you faced the boss and spoke up. You, my friend, are the guardian of the customer's happiness, the guardian of the product's marketplace success, and the keeper of the flame. Sometimes being that guardian means NOT GETTING (the wrong) THINGS DONE, especially if doing them "wrong" is the price of doing them "fast". Long-term trends that slip under the radar and are under-valued in agile/scrum teams are things you like to bring up at retrospectives.

NOT A JERK:  You defuse tense situations. You don't add gasoline to open flame.  You call people out privately, and you praise people publicly.  You absorb blame. You deflect praise.   You admit when you failed to do any of the above, and resolve to do better when you don't live up to your own internal high moral standards.   You believe you can be a great engineer while valuing different people who have different communication styles, cultures, languages, and you think that the team's differences can become sources of strength, and when difficulty and division is spreading, you find ways to unify the team and give it a focus, a technical engineering focus, with a strong shared ethical principle.  You are a curator of good company culture.

But let's be honest about the above. The above is the person I'm trying very hard to be.  I'm trying to hire people who are trying to do some of the things I try to do. 

My questions for you guys:

  • How Do you Find Out Real Stuff about Candidates when you are conducting an interview?
  • What do you want to know when you hire or when you are seeking a job?  
    • As a candidate, do you ask who you would report to?    What do you hope to learn?
    • How do you feel about the number of people in the room? Do you think it's a better sign when you are interviewed by one person, or when you're interviewed by three or four people?
    • Are there any "shibboleth" questions you have as a candidate? What do you want to find out with them? Even if you don't want to state your question directly, what are you trying to figure out? I don't have a specific question, but if I see signs of aggression, arrogance, or the naked exercise of rank or privilege, I quietly note it to myself and decline further interactions with the company. One thing you certainly can't fix in a company is the culture of its leaders.
  • When you are being interviewed, how should people approach you to find out the most accurate picture of your strengths and weaknesses?

I'd like to open the floor to a discussion now, let's keep it civil. Thanks.