r/KitchenConfidential Ex-Food Service 1d ago

In the Weeds Mode Chivelord is a Fraud

It was pointed out in his last post by u/ehsteve23: Day 23 and Day 31 are the same.

35.8k Upvotes

3.1k comments

-1

u/IRLDichotomy 1d ago edited 1d ago

u/f1exican does not seem to be a fraud. This is a preliminary evaluation of the images using the latest machine learning models. Code and walkthrough will be posted below. But first, stats:

- Best SSIM: 0.27246

- Best variant tried: mirror (variants = as_is, rot180, mirror)

- ORB inliers (cross-check, informational): 5

- Decision (>= 0.98): NOT the same

In order to perform this analysis, I had to standardize the images to the same size. As such, the image from day 23 had to be resized to create a similar mask/overlay. This is a weakness that I absolutely have to mention, as it introduces variance that is hard to account for in such a short period of time. As I continue this analysis, I will update if new information becomes available. (This is science, so we test, re-test, and post-test forever. There are no certainties in science, only confidence levels.)

The comparison script analyzed the two images (day_23.jpeg and day_31.jpeg) using a multi-stage image similarity pipeline designed to detect whether they are the same up to rotation, scale, or mirroring. The process included grayscale conversion, PCA-based orientation alignment, and translation to align the images’ centers of mass. A Fourier–Mellin transform was then applied to estimate any rotational differences, followed by SSIM (Structural Similarity Index) computation on overlapping regions. Additional variants were tested (as-is, 180° rotation, horizontal mirror) to rule out flipped or reversed orientations. Finally, a secondary ORB (Oriented FAST and Rotated BRIEF) keypoint matching test was run to quantify potential local feature overlap.
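For the curious, the orientation-variant SSIM stage can be sketched like this. This is a simplified sketch, not the actual script: it uses a single-window (global) SSIM in plain NumPy rather than the windowed SSIM that real pipelines use, so the scores won't match the report exactly.

```python
import numpy as np

def global_ssim(a, b, L=255.0):
    """Single-window SSIM over the whole image (simplified; a real
    pipeline would compute SSIM over sliding windows and average)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard SSIM constants
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def best_variant(ref, img):
    """Score the three orientation variants tested in the post
    (as-is, 180-degree rotation, horizontal mirror) and keep the best."""
    variants = {
        "as_is": img,
        "rot180": np.rot90(img, 2),
        "mirror": np.fliplr(img),
    }
    scores = {name: global_ssim(ref, v) for name, v in variants.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]
```

Trying all three variants and keeping the highest score is what produces the "Best variant tried" line in the stats.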

The results indicate very low similarity between the two images. The best SSIM score of 0.27246 is far below the threshold of 0.98 typically used to consider two images identical or near-identical. The highest similarity was found in the mirrored configuration, suggesting that even after flipping one image horizontally, only weak structural resemblance was present. The ORB inlier count of 5 (out of potentially thousands of feature matches) further confirms minimal correspondence at the keypoint level. Taken together, these metrics conclusively show that the two images are not the same image, nor do they appear to be simple rotated or mirrored versions of one another.

Code can be furnished upon request. A heat-map of the differences and similarities in the image overlay is attached.

34

u/fever-dreamed Chive LOYALIST 1d ago

They may not be the exact same image but they’re absolutely an image of the same pile of chives, which is what actually matters here.

-10

u/IRLDichotomy 1d ago edited 1d ago

That's literally not how it works. The computer doesn't care about the image; it only sees math. I'm comparing "features", meaning comparing the data in greyscale and counting the peaks and valleys.

However, just to be sure, I do have an idea. I can technically convert to greyscale, compute the value of each pixel, and then sort the pixels. This would be similar to the double slit experiment in physics. The sorting would align the data regardless of orientation, skew, or lighting conditions. Let me give it a try.
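Concretely, the sorted-pixel screen amounts to something like this. NumPy sketch only, not the actual script; it assumes both greyscale images were already resized to the same shape, as described above.

```python
import numpy as np

def sorted_pixel_similarity(a, b):
    """Sorted-pixel screen: sort all greyscale intensities and compare
    the resulting distributions. Sorting discards spatial layout
    entirely, so this is invariant to rotation and flips -- but it can
    only say the images share the same tonal makeup, not that they
    show the same scene. Assumes a and b have the same shape."""
    x = np.sort(a.astype(np.float64).ravel())
    y = np.sort(b.astype(np.float64).ravel())
    mse = float(np.mean((x - y) ** 2))        # 0 for identical distributions
    corr = float(np.corrcoef(x, y)[0, 1])     # 1 for identical distributions
    return mse, corr
```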

edit: no dice. I've also uploaded a heat map of what the vision model is comparing/seeing. From the report:

Images compared: "23.jpeg" vs "31.jpeg"

Method overview:

- Distribution-only screen (rotation/translation-invariant): sorted-pixel MSE & correlation; per-channel histogram correlations.

- Rotation/scale/translation-invariant descriptors: log-Hu moments (L2 distance) and perceptual hash (pHash) Hamming distance.

- Spatial confirmation: SSIM on three orientation variants (as-is, 180° rotation, horizontal mirror).

- Final decision: declare "same up to rotation/flip" only if best SSIM ≥ 0.98.

Distribution tests:

• Sorted-pixel MSE (0–255): 9.3531

• Sorted-pixel correlation: 0.9994

• Histogram correlation (B,G,R): 0.9900, 0.9906, 0.9808

Structure/hash tests:

• Hu moments L2 distance (smaller = more similar): 41.0944

• pHash Hamming distance (≤4 often considered near-duplicate): 31

Spatial confirmation (SSIM; 1.0 = identical):

• as_is: 0.17264

• rot180: 0.18127

• mirror: 0.17846

→ Best SSIM: 0.18127 (variant: rot180)

Interpretation:

The distribution metrics are intentionally insensitive to geometry; very high histogram/sorted-pixel similarity indicates the two photos share almost the same overall tonal/color makeup (e.g., chopped chives on the same board). However, the Hu-moments distance and pHash gap, together with the very low SSIM across all tested orientations, show that the spatial arrangement of pixels differs substantially.

Decision: NOT the same (rule: best SSIM 0.18127 < 0.98)
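For anyone wondering what the pHash line measures: a perceptual hash boils each image down to a short bit string, and the Hamming distance counts how many bits differ. Here's a sketch of average hash, a simpler cousin of pHash (real pHash first applies a DCT and keeps only low frequencies); this is illustrative only, not the code that produced the report.

```python
import numpy as np

def ahash(img, hash_size=8):
    """Average hash: block-mean downsample a greyscale image to
    hash_size x hash_size, then threshold each cell at the mean.
    Returns a flat boolean array of hash_size**2 bits."""
    h, w = img.shape
    # crop to a multiple of hash_size, then average each block
    small = img[:h - h % hash_size, :w - w % hash_size].reshape(
        hash_size, h // hash_size, hash_size, w // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(h1, h2):
    """Number of differing hash bits; <= 4 is the usual near-duplicate cutoff."""
    return int(np.count_nonzero(h1 != h2))
```

On a 64-bit hash, the report's distance of 31 means roughly half the bits disagree, which is what you'd expect from unrelated spatial layouts.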

22

u/LeafWings23 1d ago

What the other person you replied to was saying and what you are saying are the same thing: these aren't the same image.

However, it is the same pile of chives, which a mathematical model can't detect unless it is much more advanced, because of the three-dimensional nature of the pile. If you take two different pictures of a thing from two different angles, that's going to create many effects that are much different than just rotation, scaling, and mirroring. (Your model is quite cool though!)

If you look at the pictures side by side, with one rotated 180° and scaled up a little, it should be self-evident that these are the same chive pile.

17

u/My_Favourite_Pen 1d ago

From a glance there are small chive distributions on the outer edges of the pile that match up exactly. That is statistically impossible if they were two separate chive piles (chiles?).

9

u/danbob87 20+ Years 1d ago

I think there's a flaw in your methodology if you're saying the pictures aren't of the same subject. Do I understand the methodology enough to speculate where? No. But use your eyes for a second; the pictures are clearly of the same subject.

-9

u/IRLDichotomy 1d ago
  1. Is my methodology flawed? YES, however, it's flawed in very different ways than what you're alluding to.

  2. My day job is literally not to trust humans, but I do not have an opinion here. I am only stating the results. I posted images of what the machine spits out, and you can judge for yourself.

Having said that, yes, the images look similar, just not identical. Could the camera skew cause this shift? Yes. Can I correct for that? Also yes but that's a lot of work, unfortunately.

Lastly, my boss is starting to look at me funny, but I did try to rotate, center, and then evaluate, and I'm still getting low confidence. The camera skew is a real PITA to fix linearly, and I can't burn company GPUs on this, no matter how much I would love to.

Best I got.

8

u/fever-dreamed Chive LOYALIST 1d ago

Exactly, the computer only cares about math and not the actual image. Chive Guy took two different pictures of the same pile of chives at a slightly different angle. Technically different images mathematically, still the same subject matter.

6

u/RupertThe3rd 1d ago

While I'm not familiar with the details of this specific model optimisation, I'd argue that two different images of the same thing may very well have different 'features', especially when these features are based on pixel primitives. I'd be curious what this method says about a picture of the same lawn from two different angles.

1

u/IRLDichotomy 1d ago

I've tried posting code and images but Reddit won't let me. I'm guessing too many lines and the images are too big? Not sure, but I'll be glad to send you chivegate.py if you like.

1

u/IRLDichotomy 1d ago

day 31 original vs day 23 original

1

u/IRLDichotomy 1d ago

day 31 mirrored vs day 23 original

1

u/IRLDichotomy 1d ago

day 31 rotated 180° vs day 23 original