On parallel class hierarchies and generics-hell in Java

This article discusses the limitations and hassles of Java generics when dealing with “parallel class hierarchies”. A parallel class hierarchy is the situation where you have a “main” class hierarchy and a “secondary” class hierarchy that mirrors the “main” hierarchy. The secondary class models some (isolatable/groupable) aspect of the main class that can and should be separated into another class. This frequently shows up in classical object and relational data models.

To get concrete, let’s start with two main classes: Equipment and Structure. Equipment is generally mobile, whereas Structure is not. Both are subclasses of a common abstract class PhysicalItem. The class diagram and Java classes are shown below. Fields are omitted.

[Class diagram: Equipment and Structure extend the abstract PhysicalItem]
public abstract static class PhysicalItem {}
public static class Equipment extends PhysicalItem {}
public static class Structure extends PhysicalItem {}

Now let’s add a “Status” attribute to this hierarchy. A Status is a collection of properties that changes together (and frequently), so we wish to put them into a separate class rather than into the main class.

[Class diagram: EquipmentStatus and StructureStatus extend the abstract PhysicalItemStatus, mirroring the PhysicalItem hierarchy]
public abstract static class PhysicalItem {}
public static class Equipment extends PhysicalItem {}
public static class Structure extends PhysicalItem {}

public abstract static class PhysicalItemStatus {}
public static class EquipmentStatus extends PhysicalItemStatus {}
public static class StructureStatus extends PhysicalItemStatus {}

The data modellers draw an arrow (directional or bidirectional) between the PhysicalItem and PhysicalItemStatus, note the need for the subclasses to handle the appropriate class casting, and consider themselves done.

The programmers have multiple ways to implement that arrow. They can add a field named “status” of type PhysicalItemStatus to PhysicalItem. And/or they can add a field named “item” of type PhysicalItem to PhysicalItemStatus. Those fields can live in the superclass or in the subclasses. Another option is to create a third class that holds both fields, “status” and “item”, but that is hardly ever done (see the sketch below).
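For completeness, a minimal sketch of that rarely-used third option (the name ItemStatusLink is made up for illustration):

public final class ItemStatusLink {
    private final PhysicalItem item;
    private final PhysicalItemStatus status;

    public ItemStatusLink(PhysicalItem item, PhysicalItemStatus status) {
        this.item = item;
        this.status = status;
    }

    public PhysicalItem getItem() { return item; }
    public PhysicalItemStatus getStatus() { return status; }
}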

For the point of this article, we’re going to choose the first option: Add a field named “status” to the PhysicalItem of type PhysicalItemStatus. So we add getter methods to the PhysicalItem class hierarchy:

public abstract static class PhysicalItem {

    protected PhysicalItemStatus status;

    public PhysicalItemStatus getStatus() {
        return status;
    }
}

public static class Equipment extends PhysicalItem {
    @Override public EquipmentStatus getStatus() {
        return (EquipmentStatus) status;
    }
}

public static class Structure extends PhysicalItem {
    @Override public StructureStatus getStatus() {
        return (StructureStatus) status;
    }
}

Everything is perfect. Notice how our IDE (integrated development environment) is aware of the covariant method overrides and automatically knows the correct return type.

public static void showCastingIsAutomatic(PhysicalItem item, Equipment equipment, Structure structure) {
    PhysicalItemStatus itemStatus = item.getStatus();
    EquipmentStatus equipmentStatus = equipment.getStatus();
    StructureStatus structureStatus = structure.getStatus();
}

Unfortunately, that was the last of the good news. Now we have to deal with the setters:

It is convenient to add a setter method to the PhysicalItem. But now the IDE (and the Java compiler) happily accepts code that fails at runtime:

public abstract static class PhysicalItem {

    protected PhysicalItemStatus status;

    public void setStatus(PhysicalItemStatus status) {
        this.status = status;
    }
}

public static void problemsWithSetter(PhysicalItem item, Equipment equipment, Structure structure,
         PhysicalItemStatus itemStatus, EquipmentStatus equipmentStatus, StructureStatus structureStatus) {
    item.setStatus(itemStatus); //fine
    equipment.setStatus(equipmentStatus); //fine
    structure.setStatus(structureStatus); //fine

    equipment.setStatus(structureStatus); //WRONG TYPE. Compiles, but getStatus() throws ClassCastException later
    structure.setStatus(equipmentStatus); //WRONG TYPE. Compiles, but getStatus() throws ClassCastException later
}

We can add setter methods to the subclasses, but they OVERLOAD rather than OVERRIDE the parent setter method. So client code sees BOTH methods, and it is STILL possible to cause a runtime error.

public abstract static class PhysicalItem {
    protected PhysicalItemStatus status;

    public void setStatus(PhysicalItemStatus status) {
        this.status = status;
    }
}

public static class Equipment extends PhysicalItem {
    public void setStatus(EquipmentStatus status) {
        this.status = status;
    }
}

public static class Structure extends PhysicalItem {
    public void setStatus(StructureStatus status) {
        this.status = status;
    }
}

The image below shows the Eclipse IDE suggesting completions for the setter method. The problem is that we see TWO methods: one is right, and the other leads to runtime errors.

[Screenshot: Eclipse autocomplete offering both setStatus(PhysicalItemStatus) and setStatus(EquipmentStatus)]

Before Java gained generics, we had to live with this. The code had to be written defensively, throwing an error when the wrong method was called. And the two-method problem was so annoying that we usually just lived with the single method in the superclass.
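A sketch of that defensive style (the instanceof guard is my reconstruction of the idiom, not code from a specific project):

public static class Equipment extends PhysicalItem {
    @Override public void setStatus(PhysicalItemStatus status) {
        //defensive runtime check: reject anything that is not an EquipmentStatus
        if (!(status instanceof EquipmentStatus)) {
            throw new IllegalArgumentException("Equipment requires an EquipmentStatus, got " + status);
        }
        this.status = status;
    }
}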

When Java generics arrived, we could parameterize the hierarchy over the Status class. Then we could declare our getter and setter in the superclass using the type variable. Our subclasses now require the correct argument types and can show errors at COMPILE TIME.

public abstract static class PhysicalItem<STATUS extends PhysicalItemStatus> {

    protected STATUS status;

    public STATUS getStatus() {
        return status;
    }
    public void setStatus(STATUS status) {
        this.status = status;
    }
}

public static class Equipment extends PhysicalItem<EquipmentStatus> {}

public static class Structure extends PhysicalItem<StructureStatus> {}

public static void getterCastingIsAutomatic(PhysicalItem item, Equipment equipment, Structure structure) {
    PhysicalItemStatus itemStatus = item.getStatus(); //item is a raw PhysicalItem, so only the superclass type is known
    EquipmentStatus equipmentStatus = equipment.getStatus();
    StructureStatus structureStatus = structure.getStatus();
}

public static void noProblemsWithSubclassSetters(Equipment equipment, Structure structure,
       PhysicalItemStatus itemStatus, EquipmentStatus equipmentStatus, StructureStatus structureStatus) {
    equipment.setStatus(equipmentStatus); //fine
    structure.setStatus(structureStatus); //fine

    equipment.setStatus(structureStatus); //WRONG TYPE. COMPILER ERROR
    equipment.setStatus(itemStatus); //WRONG TYPE. COMPILER ERROR
    structure.setStatus(equipmentStatus); //WRONG TYPE. COMPILER ERROR
    structure.setStatus(itemStatus); //WRONG TYPE. COMPILER ERROR
}

public static void potentialErrorsWithSuperclassSetters(PhysicalItem item, PhysicalItemStatus itemStatus,
       EquipmentStatus equipmentStatus, StructureStatus structureStatus) {
    //item is a raw PhysicalItem, so the compiler cannot check the STATUS argument
    item.setStatus(itemStatus); //LEGAL BUT POTENTIAL RUNTIME ERROR
    item.setStatus(equipmentStatus); //LEGAL BUT POTENTIAL RUNTIME ERROR
    item.setStatus(structureStatus); //LEGAL BUT POTENTIAL RUNTIME ERROR
}

For those who complain about CSS in HTML pages, there is the old joke: “CSS is the worst thing ever! With the exception of not having CSS.” Well, the same thing applies to generics: “Generics are the worst thing ever, except for not having generics.”

(Well-read readers will have immediately noted that I am stealing the phrase from Winston Churchill: “Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except all those other forms that have been tried from time to time”.)

Generics used in this manner do not fully solve the problem; they are not a 100% solution.

Perhaps even worse than not fully solving the problem, generics spread to all client code. Now ALL clients of the hierarchy must deal with generics, even code that has nothing to do with status. Either you write your code with raw types (and ignore the compiler warnings), or you add <?> everywhere, pointlessly. It is a REAL burden.

And it is a burden that multiplies, because big data models have many aspects we would like to extract into secondary classes. So there is a constant pull towards more type parameters, and we wind up referring to PhysicalItem<? extends PhysicalItemStatus, ? extends PhysicalItemHistory, ? extends PhysicalItemType, ? extends CivilOrMilitary> and more. Every time you add a new type parameter, you break code in potentially hundreds of places.

To be clear, Java generics “disappear at the leaf classes”, i.e., those classes that have no subclasses. (If Equipment were a concrete leaf class, code dealing with Equipment would see no generics.) However, much of our code must reference intermediate classes (classes that have both superclasses and subclasses), and that is where the pain is. Equipment has many subclasses, but a lot of code just needs to know it is an Equipment. Hence, generics-hell. A sketch follows.
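Here is what that looks like (Vehicle, VehicleStatus, Truck, and TruckStatus are made-up names for illustration):

//an intermediate class must re-declare and propagate the type parameter
public abstract static class VehicleStatus extends PhysicalItemStatus {}
public abstract static class Vehicle<STATUS extends VehicleStatus> extends PhysicalItem<STATUS> {}

//at the leaf, the generics disappear
public static class TruckStatus extends VehicleStatus {}
public static class Truck extends Vehicle<TruckStatus> {}

//but client code that merely wants "a vehicle" is still infected:
public static void refuel(Vehicle<?> vehicle) {
    //this method never touches status, yet its signature carries the wildcard
}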

After many years of fighting this, I have come to believe that the price of generics is too high to include them in our “main” classes. Generics are very useful, but they MUST be contained within the code that needs them, while all other code must be able to work without knowing about them.

My solution is a “bridge” method that converts the main class to a particular generic instance. This method takes the client from the main instance (that has no generics) to the generic secondary instance (that DOES have the generics).

In our example, there is no getter or setter for the status field on PhysicalItem. Instead, a new method toPhysicalItemStatus() returns a HasPhysicalItemStatus “bridge” instance, which DOES have generics.

The following is a complete example:

public abstract class PhysicalItem {
    protected PhysicalItemStatus status;

    /**
     * Delegate to a "Bridge" class that "casts" to an instance with the correct generics.
     */
    public HasPhysicalItemStatus<? extends PhysicalItemStatus> toPhysicalItemStatus() {
        return new HasPhysicalItemStatusBridge<>(this, PhysicalItemStatus.class);
    }
}

public class Equipment extends PhysicalItem {
    @Override public HasPhysicalItemStatus<EquipmentStatus> toPhysicalItemStatus() {
        return new HasPhysicalItemStatusBridge<>(this, EquipmentStatus.class);
    }
}

public class Structure extends PhysicalItem {
    @Override public HasPhysicalItemStatus<StructureStatus> toPhysicalItemStatus() {
        return new HasPhysicalItemStatusBridge<>(this, StructureStatus.class);
    }
}

public interface HasPhysicalItemStatus<STATUS extends PhysicalItemStatus> {
    STATUS getStatus();
    void setStatus(STATUS status);
}

/**
 * A "bridge" instance that switches to a Status view with the correct generics.
 * This is a new instance, so in theory it costs heap space. However, everything is final, so
 * the JIT can usually eliminate allocation of short-lived instances (escape analysis), avoiding
 * garbage-collection pressure.
 *
 * Put this in the same package as PhysicalItem so it can access a private/protected field (status).
 * Alternatively, it can live in a different package, and the constructor takes a getter and setter.
 */
public final class HasPhysicalItemStatusBridge<STATUS extends PhysicalItemStatus> implements HasPhysicalItemStatus<STATUS> {
    private final PhysicalItem item;
    private final Class<STATUS> statusClass;

    public HasPhysicalItemStatusBridge(PhysicalItem item, Class<STATUS> statusClass) {
        this.statusClass = statusClass;
        this.item = item;
    }

    @SuppressWarnings("unchecked") @Override public STATUS getStatus() {
        return (STATUS) item.status;
    }

    @Override public void setStatus(STATUS newStatus) {
        assert statusClass.isInstance(newStatus); //intermediate classes need checking
        item.status = newStatus;
    }
}

public abstract class PhysicalItemStatus {}
public class EquipmentStatus extends PhysicalItemStatus {}
public class StructureStatus extends PhysicalItemStatus {}

The test code below shows the results of all the getters and setters. Subclasses have correct typing. Intermediate classes have methods that are “as correct as can be” but will always require some amount of runtime type checking. The possibility of runtime errors still exists, but it is no worse than before: we catch them at runtime and throw an exception rather than corrupting the data model.

public static void testExamples(PhysicalItem item, Equipment equipment, Structure structure, 
      PhysicalItemStatus itemStatus, EquipmentStatus equipmentStatus, StructureStatus structureStatus) {

    //Correctly typed
    HasPhysicalItemStatus<? extends PhysicalItemStatus> physicalItemBridge = item.toPhysicalItemStatus();
    HasPhysicalItemStatus<EquipmentStatus> equipmentBridge = equipment.toPhysicalItemStatus();
    HasPhysicalItemStatus<StructureStatus> structureBridge = structure.toPhysicalItemStatus();

    //Correctly typed
    PhysicalItemStatus status1 = physicalItemBridge.getStatus();
    EquipmentStatus status2 = equipmentBridge.getStatus();
    StructureStatus status3 = structureBridge.getStatus();

    //supertype has issues. (But always will)
    physicalItemBridge.setStatus(itemStatus); //COMPILE ERROR because the type is <? extends PhysicalItemStatus> instead of <PhysicalItemStatus>

    //supertype "solution" to the above error also has issues
    HasPhysicalItemStatus<PhysicalItemStatus> physicalItemStatusBridge2 = (HasPhysicalItemStatus<PhysicalItemStatus>) item.toPhysicalItemStatus(); //have to cast
    physicalItemStatusBridge2.setStatus(itemStatus); //This is now legal but a possible runtime error, so it needs runtime checking

    physicalItemBridge.setStatus(equipmentStatus); //COMPILE ERROR CORRECTLY
    physicalItemBridge.setStatus(structureStatus); //COMPILE ERROR CORRECTLY

    equipmentBridge.setStatus(itemStatus); //COMPILE ERROR CORRECTLY
    equipmentBridge.setStatus(equipmentStatus); //fine
    equipmentBridge.setStatus(structureStatus); //COMPILE ERROR CORRECTLY

    structureBridge.setStatus(itemStatus); //COMPILE ERROR CORRECTLY
    structureBridge.setStatus(equipmentStatus); //COMPILE ERROR CORRECTLY
    structureBridge.setStatus(structureStatus); //fine
}

The important points are:

  1. The main classes are declared free of generics, so we don’t degrade the overall code base.
  2. Generics ARE provided for the code that can benefit from them. The cost is one extra method that delegates/bridges to a generically-typed interface.

Death to yaml

A while back, Spring started supporting YAML files as well as properties files for its configuration. Newer things sounded good, so I switched.

My experience has been awful. Silent failures that don’t show errors and lead me down wild goose chases.

The problem is that YAML files are whitespace-sensitive. Indentation matters, and the indentation must be consistent: if you use 2 spaces somewhere and 3 spaces somewhere else, the indentation will not line up and the parser will silently ignore the properties.

I must have had 20 of these errors. Sometimes I caught it quickly. Other times I spent an hour hunting for a bug in code when the problem wasn’t in the code at all. I’ve had enough. Back to boring properties files.

Note that we will often find YAML configuration on the internet that we want to use. For that, pick one of the online yaml-to-properties converters.

Here is my last error that was the final straw:

The following is incorrect! Run this and Spring will throw a fatal error saying it could not find the DataSource:

spring:
 application:
  name: jsonstore-server
  main:
    banner-mode: OFF
  datasource:
    url: jdbc:postgresql://${postgres.url:localhost:5432/jsonstore}

This is correct! Can you spot the difference?

spring:
  application:
    name: jsonstore-server
  main:
    banner-mode: OFF
  datasource:
    url: jdbc:postgresql://${postgres.url:localhost:5432/jsonstore}
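For reference, the same configuration as a flat properties file, where there is no indentation to get wrong:

spring.application.name=jsonstore-server
spring.main.banner-mode=off
spring.datasource.url=jdbc:postgresql://${postgres.url:localhost:5432/jsonstore}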

I rest my case.

Playwright or Selenium?

Playwright and Selenium are the two big choices for a browser user-interface test-automation tool. Selenium has been around a long time. Playwright is the new kid in town.

I’ve used Selenium before. It is a pita, but it mostly works. Unfortunately, even 99% “mostly works” is a problem when you are running hundreds of tests: something is always failing. So you wind up writing all your code with “wait-until-something-is-true” and “try-this-multiple-times-until-it-succeeds” features. And then it still fails once in a while, so you just repeat the whole test and then it works. The bigger the project, the worse this problem becomes.

In short, we use Selenium because we have to, not because we like it.

I had the opportunity to start a new project, so I tried Playwright. Still a learning curve. Still requires a lot of work. But they took all the “wait-until” stuff and moved it behind the scenes. It still has to be done, but you, the programmer, don’t have to handle it yourself unless you want to. Much better.
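For the curious, a minimal sketch of what that looks like in the Playwright Java binding (the URL and selector are made up for illustration):

import com.microsoft.playwright.*;

public class AutoWaitDemo {
    public static void main(String[] args) {
        try (Playwright playwright = Playwright.create()) {
            Browser browser = playwright.chromium().launch();
            Page page = browser.newPage();
            page.navigate("https://example.com/app"); //hypothetical app URL
            //click() auto-waits for the element to be attached, visible, stable,
            //and enabled; no hand-written wait-until loop required
            page.locator("#saveButton").click(); //hypothetical selector
        }
    }
}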

After two weeks working with Playwright, I was still impressed. Progress was slow but steady. The hardest part was that I am using Vaadin as the front-end framework, and it makes serious use of the shadow DOM, so each new element type took trial and error to get working. This would have been the same amount of work in Playwright or Selenium.

I was also fighting against the “best practices” of Playwright. I like to use “id” attributes whenever I am selecting something. And I really like to use XPath. Yes, XPath can be brittle, but don’t kid yourself: UI testing is always going to be brittle. Now, Playwright doesn’t support XPath inside the shadow DOM, and I was constantly running into that problem. Eventually, everything I was doing with XPath was easily handled using CSS. For example: select the element with tag “vaadin-form-layout” and attributes class="user-prefs" and id="userPrefsId". So I was writing “XPath-ish” selectors that translated easily into CSS like vaadin-form-layout[class="user-prefs"][id="userPrefsId"], and so forth. Anyway, personal preferences, and nothing to do with the subject of Playwright vs Selenium.
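As a sketch (using the made-up names above), the CSS form drops straight into a Playwright locator, and Playwright’s CSS engine pierces the shadow DOM where its XPath engine does not:

page.locator("vaadin-form-layout[class='user-prefs'][id='userPrefsId']").click();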

And then I read this guy and he scared me:

https://javascript.plainenglish.io/playwrights-auto-waiting-is-wrong-2f9e24beb3b8

https://zhiminzhan.medium.com/waiting-strategies-for-test-steps-in-web-test-automation-aaae828eb3b3

https://zhiminzhan.medium.com/why-raw-selenium-syntax-is-better-than-cypress-and-playwright-a0f796aafc43

https://zhiminzhan.medium.com/correct-wrong-playwrights-advantage-over-selenium-part-1-playwright-is-modern-and-faster-than-0a652c7e9ee7

https://zhiminzhan.medium.com/why-raw-selenium-syntax-is-better-than-cypress-and-playwright-part-2-the-audience-matters-a8375e6918e4

https://zhiminzhan.medium.com/playwright-vs-selenium-webdriver-syntax-comparison-by-example-4ad74ca59dcc

https://medium.com/geekculture/optimize-selenium-webdriver-automated-test-scripts-speed-12d23f623a6

His arguments were reasonable but not detailed enough to be definitively persuasive. To summarize at a very high level, his best arguments were:

  1. Playwright did waiting wrong.
  2. Selenium is the web standard, and Google/Facebook will make sure it stays up to date. Playwright could get left behind.

Ok, both of these are serious accusations. And I don’t feel qualified to comment on their validity.

Emotionally, the author, Zhimin Zhan, seemed a bit cranky. He certainly seems like an expert, but sometimes experts get cranky when their favorite technology gets left behind. Either possibility seemed plausible.

So I decided I would redo the last two weeks of Playwright work in Selenium. It only took a few hours.

As soon as I ran the same tests in Selenium, I remembered why it was always so frustrating:

The first problem was with the save menu. On startup it is inactive (disabled="true"), and my tests assert that. However, the Selenium method isEnabled() returned true. Google “selenium isenabled not working”. My God! How many years now, and that is STILL broken?

We shouldn’t all have to write this crappy code:

public static boolean isEnabled(WebElement element) {
    boolean enabled1 = element.isEnabled(); //this can be wrong
    //Selenium normalizes boolean attributes: a disabled element returns the string "true"
    String disabled = element.getAttribute("disabled"); //this is reliable
    boolean disabled2 = disabled != null && Boolean.parseBoolean(disabled);
    if (enabled1 == disabled2) { //isEnabled() and the disabled attribute contradict each other
        System.err.println("discrepancy in isEnabled");
        enabled1 = !disabled2; //trust the attribute over isEnabled()
    }
    return enabled1;
}

The next problem was when I clicked on a VaadinSideNavItem. I got this error:

org.openqa.selenium.ElementClickInterceptedException: element click intercepted: Element <vaadin-side-nav-item path="domain/person" 
id="CometPersonInit-peopleNav" role="listitem" has-children="">...</vaadin-side-nav-item> is not clickable at point (127, 92).
Other element would receive the click: <html lang="en" theme="dark">...</html>

The theme="dark" element is adjacent to the side-nav button, so something was wrong with the point-location calculation.

Setting an implicit wait period did not work. An explicit wait period did not work either. One thing that really sucks about wait code is that it swallows the exception, so you never see the actual problem. (You are ignoring the exception and not logging it.) So when it fails, all you know is that it did not work for X seconds, not why.
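When I do write retry code myself, I now make it keep the last failure so there is something to debug. A sketch of that kind of helper (my own idiom, not a Selenium API):

public static <T> T retryUntilSuccess(java.util.function.Supplier<T> action, java.time.Duration timeout) {
    java.time.Instant deadline = java.time.Instant.now().plus(timeout);
    RuntimeException last = null;
    while (java.time.Instant.now().isBefore(deadline)) {
        try {
            return action.get();
        } catch (RuntimeException e) {
            last = e; //remember the real cause instead of swallowing it
            try {
                Thread.sleep(250);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new RuntimeException(ie);
            }
        }
    }
    throw new RuntimeException("gave up after " + timeout + "; last failure attached", last);
}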

The third issue is that the Selenium isDisplayed() method checks the element’s properties but does not actually check whether the element has been scrolled into view. Playwright does it correctly (my interpretation of “visible” is literal). Playwright also automatically scrolls elements into view when you act upon them. Very nice.
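In Selenium, you end up doing the scrolling yourself, typically with the JavaScript-executor workaround (a sketch, assuming driver is an initialized WebDriver):

public static void scrollIntoViewAndClick(WebDriver driver, WebElement element) {
    //Selenium does not reliably scroll the element into the viewport on its own
    ((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView(true);", element);
    element.click();
}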

So I had 3 immediate frustrations with Selenium that Playwright took care of.

I sat back and googled some more. I found a video I liked where the speaker asserted that there was no comparison between the two.

At this point I was inclined to agree with him. And I’d invested a day to verify I was making the best decision. Out with Selenium. In with Playwright.

Now, I am not saying Playwright is without difficulties. The codegen tool is a miss more often than a hit when I use it to auto-generate Locators. And I believe I have found a bug where it just fails to work correctly with Chromium. That was a huge frustration that cost a week of stoppage hell, until I ran out of ideas and tried Firefox instead of Chromium. Firefox worked, and I was able to start moving again.

Gimp batch mode with gmic

For photo work, I use Aurora HDR 2019 and Gimp. Within Gimp, I use the GMIC plugin a great deal, as it has the best noise reduction and hot pixel reduction.

The one big weakness of Gimp is the lack of a batch mode. You cannot record an action on an individual image, then save that action and apply it to a group of images.

And I do a lot of work on groups of images.

There is a heavy-duty script-processing engine in Gimp, but I find it inaccessible. And I’m a programmer! The basic problem is that I am a lazy programmer, and I really don’t need another language unless I really need another language.

Well, there is BIMP for batch Gimp operation.

I’ve done practically nothing with BIMP. The basic operations that BIMP provides I can do in Irfanview.

Well, I found myself needing to remove the hot pixels from a few hundred photos. The gmic function is remove_hotpixels. But this was the first time I had seen the documentation, and the example is confusing. After reading it, I thought I needed to write:

+remove_hotpixels _mask_size=3, _threshold=10,

And I tried many, many options, and nothing worked.

However, some of the simpler commands worked, so I knew it was just a matter of finding the right syntax.

This post is to record what worked. The image below shows the successful BIMP settings. (Except the input field would not expand.)

The function name is plug-in-gmic-qt

The input layer is something besides 0. (Holy crap, 0 just generated output with no changes whatsoever, and that was a real pita to figure out.)

The output mode is 0. Maybe some other options work but 0 works.

The command-line string is “remove_hotpixels 3 10”, where 3 is the mask size and 10 is the threshold.

Don’t use a “plus” or “minus” prefix. Don’t name your arguments.

Well, that is it so far. Another inch of knowledge.

Update from 2020:

Well, I have given up trying to run batch gmic from within gimp. Too much of a hassle, too iffy, and far too slow.

Instead, I installed gmic, the command line tool. You can find it at https://gmic.eu/download.html.

Better a horrendously-hard-to-figure-out uphill battle from the command line than the same thing from within a clunky Gimp dialog.

The positive of using the command line is that we have well-known tools for iterating over multiple files. This matters because the “iterate over multiple files” part of the gmic command line doesn’t seem to exist: I can only figure out how to write a gmic script that reads one file and writes one file.

I can’t yet even figure out how to tell gmic to read a jpg and write a tiff.

But the “remove hot pixels” command is (on a Windows platform, using “cmd”):

for /r %f in (.\*) do gmic %f ^
-remove_hotpixels 3,10,,,Merged ^
-o %f

From the command line, cd to the folder containing the files you want to convert and paste the above text. It will recursively find all files in this folder and all sub-folders. It will open each file, run the “remove_hotpixels 3,10” command, then save over the same file. And it will do it orders of magnitude faster than the same thing in Gimp with BIMP. (Note: if you put the loop in a batch file instead of typing it at the prompt, cmd requires %%f instead of %f.)

I also find anisotropic smoothing to be very useful. It is a nightmare trying to find the right combination of arguments for a gmic command. The best way is to set up the command in Gimp, then alter the settings so it displays the arguments. Still a pita, as you have to eyeball the text and re-type it into the command line; copy-paste does not work.

Here is a starting point for anisotropic smoothing:

for /r %f in (.\*) do gmic %f ^
-fx_smooth_anisotropic 80,0.7,0.3,0.6,1.1,0.8,30,2,2,0,1,0,0,50,50 ^
-o %f

The actual settings are listed here, but good luck converting the argument types to the integer command-line values.

Not sure where the best documentation and tutorials for gmic are. Everything I have found requires you to know what everything means before you can do anything. More notes added here as I learn:

https://manpages.ubuntu.com/manpages/trusty/fr/man1/gmic.1.html

http://gimpchat.com/viewtopic.php?f=10&t=19008

https://gmic.eu/tutorial/

RIP Chester Williams

The death of Chester Williams hit me very, very hard today. I’ve written before how much the World Cup Final between South Africa and New Zealand mattered … the subject of the movie Invictus. And how important Chester was to that victory and the future of South Africa. In how he delivered the first big tackle to Jonah Lomu that set the tone for the entire game. It was at that moment I began to believe we could somehow win. Chester carried a heavy weight on his shoulders as the sole black on the team. People worried he was a “token player” and it was a fair concern because he was the first black to make it. I’d been studying him intensely all tournament (who hadn’t?) and knew he deserved to be there. But sports can be cruel and heroics seemed too much to hope for. Then, as I saw Lomu shudder and collapse, I began to hope and believe … and hope and believe … and then simply hoped and prayed and hung on until the end … like all the players on both sides. The greatest game of all time in which no one could score.

I cried like a baby after that game — the only game that ever mattered so much — and the deepest tears were because of Chester. If we’d won but he’d been a liability on the field, it would have been a setback to a country’s future. Instead, he had the game of a lifetime. And whites experienced pride and love for the gift of a black man’s pure courage in an arena that they understood viscerally. Many for the first time. The celebration of Chester was perhaps the first, honest, positive feeling that all colours could experience together.

I am sorry he died so young. But he is a legend and had 24 years of that knowledge. I will never forget my admiration of and debt to him.

Aurora HDR 2019 – Questionable RAW support of Sony RX10M4

I’ve had support communication with Aurora about problems reading Sony RX10M4 RAW files, which they claim to handle. At the time, I was complaining about invalid cropping: the corners of a RAW image contain vignetting (the dark edges), and Aurora did not remove it. Aurora support said that this is normal operation.

The following image shows this artifact: the same image in RAW and JPG, processed by Aurora without any subsequent processing. On the left is the RAW image, and you can see the vignetting. On the right is the JPG image, where the vignetting was cropped away and the image enlarged to the same pixel dimensions.

Note that the JPG image was generated by the camera, i.e., I am saving in RAW+JPG format.

I also use the Sony Imaging Edge app to read RAW images on the computer and save them to JPG. When I do this with the RAW image, I get the identical JPG that is stored on the camera. So this tells me that Imaging Edge DOES apply the vignetting correction when converting RAW.

Well, I can live with this decision by Aurora. But that isn’t the full story. As you can tell, the two images differ in other ways as well. For example, the colors are different. Perhaps this is due to Aurora having better information from the RAW image and making more informed decisions. I can live with this too, if it is true. Color can be altered.

It is when you get to the details that the serious problems appear:

  1. Aurora has made different decisions about the cropping/expansion.
  2. The RAW image has bad noise artifacts.
  3. The RAW image has chromatic aberrations.

Remember that the RAW image is stored with all the imperfections: the sensor itself has noise artifacts, plus chromatic and lens aberrations. But the RAW file also contains the information that lets the processor compensate for them. Aurora is not doing so.

Look at a close-up. Below is a portion of the tent on the left-hand side. On the left is the RAW image processed by Aurora. On the right is the exact same region from the JPG image.

First, we see that we have different shapes. This is what leads me to suspect that Aurora is not using the lens information to correct for distortion. (I am assuming that Imaging Edge IS correcting for it, as is the in-camera software.)

Second, look at the noise. The RAW image contains a lot of noise. In comparison, the JPG has eliminated that noise, but now has JPG compression artifacts.

Another close-up below shows the noise problem again AND the chromatic-aberration problem. On the left, the RAW image shows serious color error along the borders. The JPG image does not.

If Aurora is doing “correct” RAW image processing, I don’t want it. (This is why I am currently saving in RAW+JPG and only working with the RAW when I absolutely have to.) But really, I am seeing so many issues that I am not convinced that Aurora is processing the RAW correctly.

Back to tech support …

…Back from tech support. Confirmation that this is a bug.

Aurora HDR 2019 – Blowouts

Much to learn about Aurora HDR 2019 still. This particular problem: when importing a set of bracketed images into an HDR, the starting point has blowouts, i.e., the whites are crushed to 100% and information is lost forever. Below is a set of 9 images. These are bracketed exposures, with 1 EV separation.

After import into Aurora, this is what I see. (Notes: I am importing the actual RAW images, with auto-alignment, ghost reduction, color denoise, and chromatic aberration reduction all turned off.)

Ignore the vignettes; that is a separate issue I am dealing with (Aurora processes the RAW information but does not do lens correction). I have “white highlighting” turned on. All the red areas in the image are 100% white, i.e., crushed/blown out. If you look closely at the histogram, you will see a vertical line at the 100% mark. That is a problem.

That vertical line and the red splotches mean that information is lost and cannot be recovered. No amount of adjustments in Aurora will let me fix this. An HDR tool should not do this. This is a bug in the tool. We know for a fact that at least one image (actually more than one) in the set is not blown out at these locations, and that information must be retained for further editing. (Note: I proved that fact by importing only the darkest image into Aurora and noting that there were no blowouts.) I could probably work around it by removing one or more of the over-exposed images and re-importing, but that isn’t a solution. We’ll see what Aurora support has to say about it.

Sebago Resort dewinterizing – Memorial 2019

This was the first time that Robyn could not make it. We had some rain the first day, but otherwise great weather. Cold enough at night to need fires in the cabins. The lake water was really cold. My first time with waders, thanks honey. Replaced 2 dead trees and everything looking pretty good. And, of course, my new Sony RX10M4 camera.

Aurora HDR 2019 software. Sony RX10M4 5 image bracketing with 2 stop gaps. One sequence every 2 seconds. Pete Townshend “Dirty Water”. Many struggles with Aurora’s inability to accept the number “5” as the correct answer to the grouping problem.

Whassup with the Aurora HDR app sharpen halo?

Still a week-two noob here, but I am going through some old HDR work to learn Aurora HDR 2019. I had an old series of HDR images from Sebago that was interesting: boats on a dock. Boats move constantly, even on a calm day, and thus are very difficult to deal with.

The original base images are here:

[Base images: base-1, base-2, base-3, base-4]

I used Hugin to align the images and produce the HDR image shown below:

Hugin produced HDR image

I thought this was amazingly good. You can see the blur artifacts from the movement of the boats (the number 15, the motors, etc.), but if you don’t look too closely, this is a nice image.

Below is the attempt with Aurora HDR 2019, Version 1.0.0.2549. I imported the base images with Auto Alignment, Ghost Reduction, and Chromatic Aberration Reduction all turned on. Then I turned off all the Filters. (Meaning this is not going to be the final result.)

Aurora version

Obviously a fantastic job with the blurring. Amazing. I can work with this.

Except I can’t. Something is causing sharpening halos. I’ve tried turning all the filters on and off, but even with everything off, nothing eliminates those halos, afaict. So where did they come from?

Here is a closeup of the problem: look at the sides of the posts. Hugin, on the left, shows that it is possible to combine the images without excessive halos. Aurora, on the right, shows unacceptable halos.

Not all my images show excessive halo. I did a series of work with a much better camera at http://www.clevercaboose.com/2019/05/07/tablerock-trip-20190414/ But now I needed to stop and figure out what is causing this.

I created a support ticket with Aurora. Quick response:

..it’s quite normal behavior of the software…the RAW photos you’ve sent to us are quite low-resolution ones, and they come out aligned pretty well considering their resolution and format. You might have achieved a better result with RAWs though. The halos you may see are not halos, but light on the photo increased by our tone-mapping powered by AI which increases the contrast.

Contrast that looks like a noob just discovered the unsharp-mask filter! I’d rather have a more adult AI.

A note on the word “halo”: specifically, I am talking about overshoot and undershoot.

But perhaps the problem IS low resolution. These ARE old photos.

Try again with 2019 quality images:

[High-resolution base images: hidef-base-1, hidef-base-2, hidef-base-3]

Here is an HDR composite with ALL Aurora filters turned off.

ALL aurora filters turned off

The light on the horizon IS “haloed”; excuse me, AI-powered contrasting. But there is also contrast from the original images. Has Aurora added to this, even with all filters turned off? Below is an area on the horizon, zoomed in:

Zoomed in horizon

If there is any halo, it is present in the originals as well. I have no cause for complaint. Aurora is excused.

Caveat: play around with the filters (in particular “HDR Smart Structure”) and you will find many ways to create crappy sharpening. So don’t turn the knob to 11.

So the issue is resolved. But be careful working with low resolution images. The AI isn’t very smart all the time.

Update 6/1/2019: Here is an extreme example with high-resolution images. Base images first: this is 9 images with 1 EV bracketing.

And here is an HDR of all 9 images. I aligned the images (this was hand-held), did no ghost reduction, and selected the “essential/vivid” template.

9-image HDR. Align images. No ghost reduction. Essential/vivid template.

Below is the same 9-image HDR with all the same settings, EXCEPT medium ghost reduction turned on. And here we see the problem: that is the worst halo I have ever seen.

Previously I was frustrated because I could not find a setting to turn off whatever was causing the halo. Now it is clear why: the halo is baked in when the image is created.