View Issue Details

ID: 0034961
Project: Lazarus
Category: TAChart
View Status: public
Last Update: 2019-02-02 17:33
Reporter: Marcin Wiazowski
Assigned To: wp
Priority: normal
Severity: minor
Reproducibility: always
Status: closed
Resolution: fixed
Product Version: 2.1 (SVN)
Product Build: 60234
Target Version: 2.0
Fixed in Version:
Summary: 0034961: TAChart: finite precision of floating point operations makes pointers/marks invisible

Description:



In the TChart.CalculateTransformationCoeffs procedure, the FScale and FOffset factors are calculated based on the current extent (i.e. FCurrentExtent); they are then used to recalculate the extent (i.e. the results are written back to FCurrentExtent). So, ideally, after the calculations, FCurrentExtent should still hold exactly the same values (with the exception of the case when Proportional = True - in that case either the horizontal or the vertical range becomes even larger).



But, due to the finite precision of floating point operations and of the Double type, FScale may be calculated in such a way that the new FCurrentExtent's bounds are tighter than those of the original FCurrentExtent - for example:

    original FCurrentExtent.a.Y = -10
    original FCurrentExtent.b.Y = 21

    new FCurrentExtent.a.Y = -10
    new FCurrentExtent.b.Y = 20.999999999999996

Initially, with default axes' settings, FCurrentExtent is initialized so that all series' data points are visible. The extent may then become even larger, to make additional space for margins, marks, etc. - but it may never become smaller. Due to the lack of floating point precision, however, in practice it can become smaller - so, as in the example above, a data point with Y = 21 will be excluded from the chart's viewport and will not be presented to the user (or, in the case of a bar series, the data point's mark will not be presented).

To solve the problem, FScale must be adjusted, and then the extent must be calculated again. Since a Double variable can hold up to 15-16 significant decimal digits, it's enough to adjust the FScale variable by the magnitude of one or two least significant digits - this can be done by dividing it by 1.000000000000001. But multiplying is faster than dividing, so it's even better to multiply the variable by 1 / 1.000000000000001 = 0.999999999999999 instead (please note that this is the greatest value less than 1 that can be represented using 15 significant decimal digits).
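For illustration, here is a minimal standalone sketch of this round trip and of the proposed correction (plain FPC, not TAChart code - the variable names and the pixel values are made up, and only one axis is considered; the real coefficients are computed by CalcScale / CalcOffset in the patch):

  program ScaleRoundTrip;
  {$mode objfpc}
  var
    GraphMin, GraphMax: Double;   // extent in graph units, e.g. -10 .. 21
    ImageLo, ImageHi: Integer;    // the same extent in image (pixel) units
    Scale, Offset, NewMax: Double;
  begin
    GraphMin := -10;  GraphMax := 21;
    ImageLo := 0;     ImageHi := 204;   // an arbitrary clip rect size

    // Forward coefficients, analogous to CalcScale / CalcOffset
    Scale  := (ImageHi - ImageLo) / (GraphMax - GraphMin);
    Offset := ((ImageLo + ImageHi) - Scale * (GraphMin + GraphMax)) * 0.5;

    // Mapping the pixel bound back to graph units may undershoot GraphMax
    NewMax := (ImageHi - Offset) / Scale;
    WriteLn('recalculated upper bound: ', NewMax);

    // Proposed correction: shrink Scale by about one least significant digit
    // until the recalculated extent is no longer tighter than the original
    // (the patch bounds this loop to 100 iterations, just to be safe)
    while NewMax < GraphMax do begin
      Scale  := Scale * 0.999999999999999;
      Offset := ((ImageLo + ImageHi) - Scale * (GraphMin + GraphMax)) * 0.5;
      NewMax := (ImageHi - Offset) / Scale;
    end;
    WriteLn('corrected upper bound:    ', NewMax);
  end.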



In the attached animation it can be observed that, for some FClipRect heights (for example 202), the lower FCurrentExtent bound is -9.999999999999998 instead of -10, so the first bar's mark is invisible (because the TChart.IsPointInViewPort function reports that -10 is outside the extent). For some other FClipRect heights (for example 204), the upper FCurrentExtent bound is 20.999999999999996 instead of 21, so the second bar's mark is invisible (because the TChart.IsPointInViewPort function reports that 21 is outside the extent).



The attached patch adjusts FScale, recalculates FOffset, and finally recalculates the extent and validates it again.

Some notes: currently (i.e. without the patch), at the end of the TChart.CalculateTransformationCoeffs function, FCurrentExtent is overwritten with new values - this is done inside the UpdateMinMax calls. But, since the new values must NOT be written to FCurrentExtent before validation, I edited the UpdateMinMax procedure to write its results not directly to FCurrentExtent, but to a temporary UpdatedExtent variable. For this reason, I renamed "UpdateMinMax" to "CalcUpdatedMinMax" (I thought it would be more adequate not to use the "update" term). Due to this change, the TAxisCoeffHelper's FMin and FMax variables no longer need to be pointers, so I changed them to hold plain values. I also needed to perform the same boolean comparison that is currently used in the TAxisCoeffHelper.CalcScale function, so I introduced a FInputRangeIsInvalid variable, which is initialized in TAxisCoeffHelper.Init and used both by the TAxisCoeffHelper.CalcScale function and by my code.


Regards
Tags: No tags attached.
Fixed in Revision: 60235, 60276
LazTarget: 2.0
Widgetset: Win32/Win64
Attached Files
  • Reproduce.zip (3,702 bytes)
  • patch.diff (8,622 bytes)
    Index: components/tachart/tachartaxis.pas
    ===================================================================
    --- components/tachart/tachartaxis.pas	(revision 60234)
    +++ components/tachart/tachartaxis.pas	(working copy)
    @@ -245,15 +245,16 @@
     
       TAxisCoeffHelper = object
         FAxisIsFlipped: Boolean;
    +    FInputRangeIsInvalid: Boolean;
         FImageLo, FImageHi: Integer;
         FLo, FHi: Integer;
    -    FMin, FMax: PDouble;
    +    FMin, FMax: Double;
         function CalcOffset(AScale: Double): Double;
         function CalcScale(ASign: Integer): Double;
         constructor Init(AAxis: TChartAxis; AImageLo, AImageHi: Integer;
           AMarginLo, AMarginHi, ARequiredMarginLo, ARequiredMarginHi, AMinDataSpace: Integer;
    -      AHasMarksInMargin, AAxisIsVertical: Boolean; AMin, AMax: PDouble);
    -    procedure UpdateMinMax(AConv: TAxisConvFunc);
    +      AHasMarksInMargin, AAxisIsVertical: Boolean; AMin, AMax: Double);
    +    procedure CalcUpdatedMinMax(AConv: TAxisConvFunc; out AMin, AMax: Double);
       end;
     
       procedure SideByAlignment(
    @@ -1246,7 +1247,7 @@
     
     constructor TAxisCoeffHelper.Init(AAxis: TChartAxis; AImageLo, AImageHi: Integer;
       AMarginLo, AMarginHi, ARequiredMarginLo, ARequiredMarginHi, AMinDataSpace: Integer;
    -  AHasMarksInMargin, AAxisIsVertical: Boolean; AMin, AMax: PDouble);
    +  AHasMarksInMargin, AAxisIsVertical: Boolean; AMin, AMax: Double);
     begin
       FAxisIsFlipped := (AAxis <> nil) and AAxis.IsFlipped;
       FImageLo := AImageLo;
    @@ -1267,14 +1268,21 @@
               ARequiredMarginLo, ARequiredMarginHi, AMinDataSpace);
         end;
       end;
    +
    +  if AAxisIsVertical then
    +    // FInputRangeIsInvalid:= (FMax <= FMin) or (Sign(FHi - FLo) <> -1{ASign})
    +    FInputRangeIsInvalid:= (FMax <= FMin) or (FHi >= FLo)
    +  else
    +    // FInputRangeIsInvalid:= (FMax <= FMin) or (Sign(FHi - FLo) <> 1{ASign});
    +    FInputRangeIsInvalid:= (FMax <= FMin) or (FLo >= FHi);
     end;
     
     function TAxisCoeffHelper.CalcScale(ASign: Integer): Double;
     begin
    -  if (FMax^ <= FMin^) or (Sign(FHi - FLo) <> ASign) then
    +  if FInputRangeIsInvalid then
         Result := ASign
       else
    -    Result := (FHi - FLo) / (FMax^ - FMin^);
    +    Result := (FHi - FLo) / (FMax - FMin);
       if FAxisIsFlipped then
         Result := -Result;
     end;
    @@ -1281,15 +1289,15 @@
     
     function TAxisCoeffHelper.CalcOffset(AScale: Double): Double;
     begin
    -  Result := ((FLo + FHi) - AScale * (FMin^ + FMax^)) * 0.5;
    +  Result := ((FLo + FHi) - AScale * (FMin + FMax)) * 0.5;
     end;
     
    -procedure TAxisCoeffHelper.UpdateMinMax(AConv: TAxisConvFunc);
    +procedure TAxisCoeffHelper.CalcUpdatedMinMax(AConv: TAxisConvFunc; out AMin, AMax: Double);
     begin
    -  FMin^ := AConv(FImageLo);
    -  FMax^ := AConv(FImageHi);
    +  AMin := AConv(FImageLo);
    +  AMax := AConv(FImageHi);
       if FAxisIsFlipped then
    -    Exchange(FMin^, FMax^);
    +    Exchange(AMin, AMax);
     end;
     
     procedure SkipObsoleteAxisProperties;
    Index: components/tachart/tagraph.pas
    ===================================================================
    --- components/tachart/tagraph.pas	(revision 60234)
    +++ components/tachart/tagraph.pas	(working copy)
    @@ -590,30 +590,103 @@
       const AMinDataSpace: Integer);
     var
       rX, rY: TAxisCoeffHelper;
    +  Tries: Integer;
    +  UpdatedExtent: TDoubleRect;
    +  UpdatedExtentIsValid: Boolean = True;
     begin
       rX.Init(
         HorAxis, FClipRect.Left, FClipRect.Right, AMargin.Left, -AMargin.Right,
         AChartMargins.Left, AChartMargins.Right, AMinDataSpace,
         (AMargin.Left <> AChartMargins.Left) or (AMargin.Right <> AChartMargins.Right),
    -    false, @FCurrentExtent.a.X, @FCurrentExtent.b.X);
    +    false, FCurrentExtent.a.X, FCurrentExtent.b.X);
       rY.Init(
         VertAxis, FClipRect.Bottom, FClipRect.Top, -AMargin.Bottom, AMargin.Top,
         AChartMargins.Bottom, AChartMargins.Top, AMinDataSpace,
         (AMargin.Top <> AChartMargins.Top) or (AMargin.Bottom <> AChartMargins.Bottom),
    -    true, @FCurrentExtent.a.Y, @FCurrentExtent.b.Y);
    +    true, FCurrentExtent.a.Y, FCurrentExtent.b.Y);
     
       FScale.X := rX.CalcScale(1);
       FScale.Y := rY.CalcScale(-1);
    -  if Proportional then begin
    -    if Abs(FScale.X) > Abs(FScale.Y) then
    -      FScale.X := Abs(FScale.Y) * Sign(FScale.X)
    -    else
    -      FScale.Y := Abs(FScale.X) * Sign(FScale.Y);
    +
    +  {Theoretically, this could be a "while True" loop - but, to be on the safe side,
    +   we limit the number of iterations, to avoid hanging forever in case of some
    +   unexpected condition (since it's better to have some data points excluded from
    +   the chart's viewport, than hang); for max number of iterations used here, see
    +   the comment below}
    +  for Tries := 1 to 100 do
    +  begin
    +    if Proportional then
    +      if Abs(FScale.X) > Abs(FScale.Y) then
    +        FScale.X := Abs(FScale.Y) * Sign(FScale.X)
    +      else
    +        FScale.Y := Abs(FScale.X) * Sign(FScale.Y);
    +    FOffset.X := rX.CalcOffset(FScale.X);
    +    FOffset.Y := rY.CalcOffset(FScale.Y);
    +    rX.CalcUpdatedMinMax(@XImageToGraph, UpdatedExtent.a.X, UpdatedExtent.b.X);
    +    rY.CalcUpdatedMinMax(@YImageToGraph, UpdatedExtent.a.Y, UpdatedExtent.b.Y);
    +
    +    {Basing on the input extent (i.e. FCurrentExtent), FScale and FOffset
    +     factors are calculated; then they are used back, to recalculate the extent
    +     (i.e. UpdatedExtent). So, ideally, UpdatedExtent would always be equal to
    +     FCurrentExtent (with the exception of case when Proportional = True -
    +     in this case either horizontal, or vertical range becomes even larger).
    +
    +     But, due to finite precision of floating point operations and the Double
    +     type, FScale may be calculated in such way, that UpdatedExtent's bounds
    +     are more tight than in the original FCurrentExtent - for example:
    +
    +       FCurrentExtent.a.Y = -10
    +       FCurrentExtent.b.Y = 21
    +
    +       UpdatedExtent.a.Y  = -10
    +       UpdatedExtent.b.Y  = 20.999999999999996
    +
    +     Initially, with default axes' settings, FCurrentExtent is initialized in
    +     such way, that all series' data points are visible. Then the extent may
    +     become even larger, to make an additional space for margins, marks, etc. -
    +     but it may never become smaller. But, due to lack of floating point
    +     precision, in practice it can become smaller - so, as in the example
    +     above, data point having its Y = 21 will be excluded from the chart's
    +     viewport and will not be presented to the user (or, in case of bar series,
    +     data point's mark will not be presented).
    +
    +     To solve the problem, FScale must be adjusted, FOffset must be recalculated,
    +     and then also UpdatedExtent must be recalculated. Since the Double variable
    +     can hold up to 15-16 significant decimal digits, it's enough to adjust the
    +     FScale variable by the magnitude of one or two least significant digits -
    +     this can be done by dividing it by 1.000000000000001. But multiplying is
    +     faster than dividing, so it's even better to multiply the variable by
    +     1 / 1.000000000000001 = 0.999999999999999 instead - please note, that this
    +     is a greatest value less than 1, that can be represented by using 15
    +     significant decimal digits.
    +
    +     One or two loop iterations should be enough, but - to overcome potential
    +     consecutive problems with lack of the floating point precision - we try
    +     up to 100 times; this effectively scales the initial FScale value by a
    +     0.999999999999999 ^ 100 = 0.9999999999999 factor - which means, that we
    +     adjusted the initial value by the magnitude of three or four least
    +     significant digits; this is in fact overkill - so we can assume, that
    +     reaching 100th iteration means that something is completely wrong and
    +     we'll never achieve success}
    +    UpdatedExtentIsValid := True;
    +    if (not rX.FInputRangeIsInvalid) and
    +       ((UpdatedExtent.a.X > FCurrentExtent.a.X) or
    +        (UpdatedExtent.b.X < FCurrentExtent.b.X)) then begin
    +      FScale.X *= 0.999999999999999;
    +      UpdatedExtentIsValid := False;
    +    end;
    +    if (not rY.FInputRangeIsInvalid) and
    +       ((UpdatedExtent.a.Y > FCurrentExtent.a.Y) or
    +        (UpdatedExtent.b.Y < FCurrentExtent.b.Y)) then begin
    +      FScale.Y *= 0.999999999999999;
    +      UpdatedExtentIsValid := False;
    +    end;
    +
    +    if UpdatedExtentIsValid then
    +      Break;
       end;
    -  FOffset.X := rX.CalcOffset(FScale.X);
    -  FOffset.Y := rY.CalcOffset(FScale.Y);
    -  rX.UpdateMinMax(@XImageToGraph);
    -  rY.UpdateMinMax(@YImageToGraph);
    +  Assert(UpdatedExtentIsValid, 'Impossible happened in TChart.CalculateTransformationCoeffs() - try to reproduce the problem and report a bug');
    +  FCurrentExtent := UpdatedExtent;
     end;
     
     procedure TChart.Clear(ADrawer: IChartDrawer; const ARect: TRect);
    
  • Animation.gif (67,495 bytes)

Activities

Marcin Wiazowski

2019-01-28 03:03

reporter  

Reproduce.zip (3,702 bytes)

Marcin Wiazowski

2019-01-28 03:04

reporter  

patch.diff (8,622 bytes)

Marcin Wiazowski

2019-01-28 03:04

reporter  

Animation.gif (67,495 bytes)

wp

2019-01-28 10:19

developer   ~0113690

I don't think it is correct to fix a local problem by changing parameters which affect the entire library, even when the change is very small.

Debugging the issue myself, I found the critical point in TChart.IsPointInViewPort, which calls the Math function InRange:

  function InRange(const AValue, AMin, AMax: Double): Boolean;
  begin
    Result:=(AValue>=AMin) and (AValue<=AMax);
  end;

The '=' operator here represents a typical floating point problem: floating point numbers may be "equal" even when the '=' operator says they are not. Better to use the SameValue function:

  function NewInRange(const AValue, AMin, AMax: Double): Boolean;
  begin
    Result:=InRange(AValue, AMin, AMax) or SameValue(AValue, AMin) or SameValue(AValue, AMax);
  end;

However, there are cases where this new function can cause trouble, for example when testing the argument x of the function arcsin(x) (where -1 <= x <= 1): the new function would be inappropriate when x turns out to be -1.000000000000001, while the usual InRange would behave better.

Therefore, I added to TAMath a new function SafeInRangeWithBounds (in addition to the already existing SafeInRange) which so far is called only from TChart.IsPointInViewPort.
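A rough sketch of what such a function could look like, following the NewInRange idea above (the actual implementation committed to TAMath may differ, e.g. it may take an epsilon parameter):

  // InRange and SameValue come from the Math unit
  function SafeInRangeWithBounds(const AValue, AMin, AMax: Double): Boolean;
  begin
    Result := InRange(AValue, AMin, AMax) or
      SameValue(AValue, AMin) or SameValue(AValue, AMax);
  end;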

Your test program seems to run correctly with this change. If it still fails, SafeInRangeWithBounds should be extended by an accuracy parameter.

Marcin Wiazowski

2019-01-28 17:14

reporter   ~0113698

Well, this is in fact exactly what I initially tried!


function TChart.IsPointInViewPort(const AP: TDoublePoint): Boolean;
const
  FLOATFIX = 10E-12;
begin
  Result :=
    not IsNan(AP) and
    InRange(AP.X, XGraphMin - FLOATFIX * Abs(XGraphMin), XGraphMax + FLOATFIX * Abs(XGraphMax)) and
    InRange(AP.Y, YGraphMin - FLOATFIX * Abs(YGraphMin), YGraphMax + FLOATFIX * Abs(YGraphMax));
end;


What happened later:

1) I realized that the problem occurs because the XGraphMin / XGraphMax / YGraphMin / YGraphMax properties (which effectively refer to FCurrentExtent) have wrong values. And wrong values in FCurrentExtent are highly problematic, since FCurrentExtent is exposed through the XGraphMin / XGraphMax / YGraphMin / YGraphMax and CurrentExtent properties - all of them public, so every user of the chart can get wrong values. In addition, the wrong CurrentExtent contents are used internally in:
* TChart.PrepareAxis
* TBasicPointSeries.UpdateMargins
* TLineSeries.Draw
* TLineSeries.DrawSingleLineInStack
* TManhattanSeries.Draw
* TBarSeries.Draw
* TAreaSeries.Draw
* TDrawFuncHelper.Create
* TCubicSplineSeries.Draw
* TFitSeries.Draw
* TCustomColorMapSeries.Draw
* TBubbleSeries.Draw
* TBoxAndWhiskerSeries.Draw
* TOpenHighLowCloseSeries.Draw
* TFieldSeries.Draw
* TDataPointTool.FindNearestPoint

So I realized that I'm not able to fix all these places with a solution like FLOATFIX above.


2) Then I realized that FCurrentExtent has wrong values because FScale.X/Y has wrong values (due to finite floating point precision). And wrong values in FScale are problematic not only because they affect FCurrentExtent's contents, but also because they can be read via the public RenderingParams property - so, again, every user of the chart can get wrong values (and, of course, I can't force them to use a solution like FLOATFIX).



So I finally realized that my solution with FLOATFIX in TChart.IsPointInViewPort is in fact only masking the problem at a very high level, far away from its root cause - and also that solving the root problem will save us from many potential problems in all the methods listed above (TChart.PrepareAxis .. TDataPointTool.FindNearestPoint).

I can also see an additional advantage of setting FScale correctly - it can be corrected with the largest possible precision, i.e. it can get the smallest possible value that is still correct. When widening the range in TChart.IsPointInViewPort, I was forced to use some arbitrary scaling factor like FLOATFIX = 10E-12, which obviously introduces additional inaccuracy.



So my suggestion is: apply the patch, but add an additional "Break" clause:

  for Tries := 1 to 100 do
  begin
    if Proportional then
      if Abs(FScale.X) > Abs(FScale.Y) then
        FScale.X := Abs(FScale.Y) * Sign(FScale.X)
      else
        FScale.Y := Abs(FScale.X) * Sign(FScale.Y);
    FOffset.X := rX.CalcOffset(FScale.X);
    FOffset.Y := rY.CalcOffset(FScale.Y);
    rX.CalcUpdatedMinMax(@XImageToGraph, UpdatedExtent.a.X, UpdatedExtent.b.X);
    rY.CalcUpdatedMinMax(@YImageToGraph, UpdatedExtent.a.Y, UpdatedExtent.b.Y);

HERE:
    Break; // Remove when tested well



This way, you will get exactly the same execution path as now - and, after the final Lazarus 2.0 is released, the "Break" can be removed - there will be plenty of time for testing then.

Regards

wp

2019-01-28 17:55

developer   ~0113699

Last edited: 2019-01-28 17:58


Sorry, I don't understand you. What's wrong with CurrentExtent? What's wrong with Scale? The difference between 21 and 20.999999999999996? They are floating point numbers, and thus are almost never "exact". Please be more specific about what's wrong with all these Draw methods that you list above. If they use the '=' operator for floating point comparisons somewhere, those places have to be fixed - but not by tweaking the Scale value. An "if x = y" with x and y being floats will almost always be false, even when x and y seem to be equal.
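The classic demonstration of that point (plain FPC, independent of TAChart):

  program FloatEquality;
  {$mode objfpc}
  uses Math;
  var
    a, b, c: Double;
  begin
    a := 0.1;  b := 0.2;  c := 0.3;
    WriteLn(a + b = c);            // FALSE: none of these values is exact in binary
    WriteLn(SameValue(a + b, c));  // TRUE: equal within the Math unit's default epsilon
  end.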

Please reopen an already resolved report and set its status to "feedback" if you have a comment. Otherwise it is too easy for me to overlook this message.

Marcin Wiazowski

2019-01-29 21:56

reporter   ~0113723

OK - I'll try to be clearer. So let's start from the beginning: I was trying to understand the intentions of the TAChart authors - please correct me if I'm wrong.


A note first: I'm sure that we are both aware of the consequences of using floating point numbers and operations - in particular, that using the "=" operator will most probably lead to trouble, and that using "<=" or ">=" may still not be enough. I'm also sure that the original TAChart authors are aware of this. With this in mind, we can use two solutions to check whether some value is in range:


case A) When it is guaranteed that the range bounds are calculated precisely (see below):

  if (X >= RangeMin) and (X <= RangeMax) then ...


case B) When there is no guarantee that the range bounds are calculated precisely:

  if (X >= RangeMin - SOME_ARBITRARY_MARGIN) and (X <= RangeMax + SOME_ARBITRARY_MARGIN) then ...


When can we assume that the range is calculated precisely? When it has been obtained by using Min() and Max() functions called on all range members - for example: we have data points at X1 = 1.234, X2 = 5.678 and X3 = 9, so the range covering all of them will be RangeMin = 1.234 and RangeMax = 9. In this case, the RangeMin and RangeMax variables just hold (in their 8 bytes of memory) binary copies of the X1 and X3 variables, so comparing them cannot fail - i.e. "X1 >= RangeMin" will always return True, and similarly "X3 <= RangeMax".
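A tiny illustration of case A with hypothetical values (plain FPC):

  program PreciseRange;
  {$mode objfpc}
  uses Math;
  var
    X1, X2, X3, RangeMin, RangeMax: Double;
  begin
    X1 := 1.234;  X2 := 5.678;  X3 := 9;
    RangeMin := Min(Min(X1, X2), X3);  // bit-for-bit copy of X1
    RangeMax := Max(Max(X1, X2), X3);  // bit-for-bit copy of X3
    WriteLn((X1 >= RangeMin) and (X3 <= RangeMax));  // always TRUE - case A applies
  end.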


In TAChart, the initial range (i.e. the initial FCurrentExtent) is calculated in the TChart.GetFullExtent() function - more precisely, in its internal JoinBounds() procedure:

  procedure JoinBounds(const ABounds: TDoubleRect);
  begin
    with Result do begin
      a.X := Min(a.X, ABounds.a.X);
      b.X := Max(b.X, ABounds.b.X);
      a.Y := Min(a.Y, ABounds.a.Y);
      b.Y := Max(b.Y, ABounds.b.Y);
    end;
  end;

The Min() and Max() functions return the binary contents of either the first or the second parameter, so the initial FCurrentExtent is calculated precisely - and this IsPointInViewPort() implementation can be safely used in this case:

  function TChart.IsPointInViewPort(const AP: TDoublePoint): Boolean;
  begin
    Result :=
      not IsNan(AP) and
      InRange(AP.X, FCurrentExtent.a.X, FCurrentExtent.b.X) and InRange(AP.Y, FCurrentExtent.a.Y, FCurrentExtent.b.Y);
  end;

Currently, XGraphMin, XGraphMax, YGraphMin and YGraphMax properties are used, but they redirect to FCurrentExtent, so this code is equivalent:

  function TChart.IsPointInViewPort(const AP: TDoublePoint): Boolean;
  begin
    Result :=
      not IsNan(AP) and
      InRange(AP.X, XGraphMin, XGraphMax) and InRange(AP.Y, YGraphMin, YGraphMax);
  end;

I think the TAChart authors designed it this way intentionally: FCurrentExtent can be trusted to include all the range members (all data points), so simple "<=" and ">=" comparisons can be used throughout the chart's code.



Initially, the current extent (in those days, the current extent was split into 4 variables: FXGraphMin, FXGraphMax, FYGraphMin and FYGraphMax, and was not directly tied to the FCurrentExtent variable) was used to calculate the FScale and FOffset variables (needed to transform axis units into graph units), but the original extent contents were NOT touched - so everything worked as designed.



So what happened later? In r19189, a decision was made to recalculate the current extent - in TChart.CalculateTransformationCoeffs() - by using FScale (thanks to this change, the Proportional property could be introduced later, in r27069).



As a consequence, current calculations look as follows:

1) FCurrentExtent is calculated (precisely!) - in TChart.GetFullExtent(), by enumerating all data points and using Min() and Max() calls

2) based on FCurrentExtent and current clipping rect, FScale is calculated (NOT precisely) - in TChart.CalculateTransformationCoeffs()

3) based on FScale, FOffset is calculated (NOT precisely) - in TChart.CalculateTransformationCoeffs()

4) (!!!) based on FScale and FOffset, FCurrentExtent is calculated back (NOT precisely) - in TChart.CalculateTransformationCoeffs()

5) this (NOT precisely calculated) FCurrentExtent is used in TChart.IsPointInViewPort() (which still assumes a precisely calculated FCurrentExtent) and in the other places in code that I listed in my previous post (the list marked with "*", with the "Draw" functions)



Everything worked fine until point 4) was introduced in r19189, thus disturbing the original, precise FCurrentExtent contents. The consequences are serious: FCurrentExtent may now EXCLUDE some data points, although all the chart's code assumes that this is impossible. Since, after the change introduced in r19189, the authors neither ensured that FCurrentExtent still includes all the data points (case A above), nor added some margin to all calculations (case B above), I assume this was just an omission.



Due to finite precision of calculating FScale in point 2), there are three possibilities:

a) FScale is rounded up, so FCurrentExtent, recalculated in point 4), becomes a bit too tight,
b) FScale has an ideal value,
c) FScale is rounded down, so FCurrentExtent, recalculated in point 4), becomes a bit wider.

Some examples from the attached animation: the original FCurrentExtent always has its Y range equal to -10 .. 21 - then it is recalculated as:

- for height 196 pixels, recalculated Y range is -10 .. 21.000000000000004 (a bit too wide - no problem here, all data points are still inside),
- for height 208 pixels, recalculated Y range is -10 .. 21 (ideal range - no problem here, all data points are still inside),
- for height 209 pixels, recalculated Y range is -9.999999999999998 .. 21 (too tight range - point having Y = -10 is no longer included).

Please note that there is no theoretical reason for making FCurrentExtent too tight. There is also no practical advantage - only the problem described here arises.



Not surprisingly, there are two possible solutions.



Solution 1: All the chart code that currently assumes that FCurrentExtent is guaranteed to include all the data points must be modified (so the code no longer makes this assumption) - this includes modifying the IsPointInViewPort() function, and also reviewing / modifying all the "Draw" functions that I listed in my previous post (the list marked with "*").

Advantages:
- simple implementation in case of the IsPointInViewPort() function - it's enough to add some additional margin to calculations

Disadvantages:
- requires touching code that has been working for years, in many places
- does not fix the root cause of the problem - FScale, FOffset and FCurrentExtent still hold improper values: they can be read by the user via public properties, so the user may get improper values
- introduces additional inaccuracy (by adding some additional, arbitrary margins/bounds to the calculations)
- besides TChart.IsPointInViewPort(), all the other current chart code also trusts the FCurrentExtent contents (accessed via the CurrentExtent property) - i.e. all the "Draw" functions that I listed in my previous post (the list marked with "*"); in some cases, like in TBubbleSeries.Draw, it's hard to decide what margins should be added to CurrentExtent, since CurrentExtent is not used for comparisons, but to calculate a clipping rect (so it seems that some pixels on the chart may remain unpainted due to a too-tight CurrentExtent):

  ext := ParentChart.CurrentExtent;
  clipR.TopLeft := ParentChart.GraphToImage(ext.a);
  clipR.BottomRight := ParentChart.GraphToImage(ext.b);
  NormalizeRect(clipR);
  ADrawer.ClippingStart(clipR);

- it's error-prone: every future usage of the FCurrentExtent record must use some additional, arbitrary margins for calculations, otherwise results may be unexpected
- not a big problem, but: widening the range in each comparison makes the execution a bit slower


Solution 2: the cause of the problem can be fixed by ensuring that FCurrentExtent includes all the data points again (which can be achieved by adjusting FScale's value if it was rounded up).

Advantages:
- FScale, FOffset and FCurrentExtent all have proper values again
- no other, currently existing chart code needs to be modified
- none of the other disadvantages listed above

Disadvantages:
- requires touching code that has been working for years



I'll try to answer your questions (I'll probably be repeating myself, but I want to be as clear as possible):



> What's wrong with CurrentExtent?

The problem is that all the current chart code assumes that CurrentExtent (FCurrentExtent) includes all the data points - although, since r19189, this assumption is sometimes (in unlucky cases) not true. This makes some points or marks invisible, and probably also causes some other problems, not yet diagnosed (in the "Draw" functions that I listed in my previous post).

In particular, since XGraphMin, XGraphMax, YGraphMin and YGraphMax are currently in fact references to FCurrentExtent, the TChart.IsPointInViewPort() function can be written as:

  function TChart.IsPointInViewPort(const AP: TDoublePoint): Boolean;
  begin
    Result :=
      not IsNan(AP) and
      InRange(AP.X, FCurrentExtent.a.X, FCurrentExtent.b.X) and InRange(AP.Y, FCurrentExtent.a.Y, FCurrentExtent.b.Y);
  end;

This shows that an improperly recalculated FCurrentExtent may exclude some data points, although these points are valid and should remain within the range. This can be fixed by adding some margins to FCurrentExtent in IsPointInViewPort() - but many other chart functions also use FCurrentExtent (CurrentExtent), so they can still exhibit improper behavior.



> What's wrong with Scale?

Scale (FScale) is used to calculate FOffset, and then finally to recalculate FCurrentExtent, so an improper (rounded up, a bit too large) FScale leads to a too-tight FCurrentExtent. In addition, an improper FScale gives the chart's user an improper scale value when reading the RenderingParams property.



> The difference between 21 and 20.999999999999996? They are floating point numbers, and thus almost never are "exact".

This is exactly as you say. Please take a look at the attached animation. In most cases, FCurrentExtent's range is a bit wider than it theoretically should be - for example -10 .. 21.000000000000004. Sometimes it's also ideal: -10 .. 21. And, unfortunately, sometimes it's tighter than it should be (so marks disappear). The solution that I'm proposing brings FCurrentExtent's range back to the ideal, or a slightly wider, range - for example from -10 .. 20.999999999999996 to -10.000000000000014 .. 21.000000000000014 - so all the data points are included again (this is done by correcting FScale, if it has been rounded up during the calculations); nothing is touched in cases where FCurrentExtent's range already contains all the data points. Currently, a slightly wider (than ideal) range is used by the chart in about 40-60% of the painting cases, so I can't see a big danger in changing a too-tight range to an ideal or slightly-too-wide range as well.



> Please be more specific what's wrong with all these Draw methods that you specify above.

All of these methods use FCurrentExtent (CurrentExtent) and trust that it includes all the data points - which is currently not always true. So either FCurrentExtent must be corrected (Solution 2 above), or all of these functions must be reviewed / corrected to use some additional margin in their calculations (which simulates that all the points are contained in FCurrentExtent - Solution 1 above).



> If they use the '=' operator for floating point comparisons somewhere these have to be fixed, but not tweaking the Scale value.

Yes, they use the "=", "<=" and ">=" operators - but only because the chart has been designed in such a way that FCurrentExtent is generated by using the Min() and Max() functions - so "=", "<=" and ">=" operations on range members cannot fail. But later, in r19189, a change was introduced that modifies FCurrentExtent a bit (not intentionally - only due to floating point inaccuracy). There is no problem if FCurrentExtent becomes a bit too wide, but there is a problem when it becomes too tight - in this case, correcting the FScale value brings FCurrentExtent back to a valid range, so all the chart code that uses FCurrentExtent magically starts working properly again. And there is no need to review and/or modify all the places in code where FCurrentExtent (CurrentExtent) is used - because it's again guaranteed to contain valid ranges (i.e. to include all the data points).



Please let me know if I was clear enough this time. Thanks for your patience, by the way.

Regards

wp

2019-02-01 12:50

developer   ~0113772

While I was tempted at some point to apply this patch, I finally will not do it, for the following reasons:

Knowing of round-off errors it is a well-accepted strategy in most places to replace the = operator by the "SameValue()" function of the math unit. This means that every floating point number is assumed to be correct only within a given epsilon, and checking for equality means to see whether the epsilon-ranges of two floating point values overlap. Ignoring this can result in effects like the disappearing labels that you report here: due to round-off error, data points occasionally were no longer found to be inside the viewport given by the chart's CurrentExtent. The correct way to fix them is to make TChart.IsPointInViewPort obey this simple rule.

Your patch, however, expands the CurrentExtent so that all data points are enclosed, and the <= and >= operations used by TChart.IsPointInViewPort are always correct.

But: TChart.IsPointInViewPort is a public function and can be applied to any number. Since this number can be subject to round-off error as well, the old issue remains: Is that number inside, outside or at the boundary of the CurrentExtent? Your recalibration of the CurrentExtent does not help here.

And it must be considered that some other code may break your recalibration. When this happens the old bug will be back.

But if your patch does not fix all the cases, why is it there at all? You say: "there is no need to review and/or modify all the places in code where FCurrentExtent (CurrentExtent) is used - because it's again guaranteed to contain valid ranges". Rather than an advantage, I see this as a disadvantage of your patch. If there really are other bugs related to this topic, something bad will happen without your patch - like the disappearing labels, or maybe disappearing data points - and it will be clear that something is wrong and needs to be fixed. With your patch, this particular bug will remain in the library like a time bomb and may come back for some reason.

Marcin Wiazowski

2019-02-01 21:28

reporter   ~0113784

Thanks for your analysis. Your arguments are, well... convincing. Maybe you'll be surprised, but, in fact, I fully agree with you.

Why? Because we both think about the same thing!



> Knowing of round-off errors it is a well-accepted strategy in most places to replace the = operator by the "SameValue()" function of the math unit. This means that every floating point number is assumed to be correct only within a given epsilon [...]

I fully agree. For checking the range, this looks like:

  if (X >= RangeMin - EPSILON) and (X <= RangeMax + EPSILON) then ...

This is a special case where both EPSILONs are equal. More generally, this will be:

  if (X >= RangeMin - EPSILON_1) and (X <= RangeMax + EPSILON_2) then ...


We just need to obtain the EPSILON values. How? The most proper answer would be: by using numerical analysis. But that's a big pain, because we need:

- exact knowledge of the math operations (the algorithm) used in the calculations - this knowledge is in the chart's sources; however, there is a danger here, because every future change to the algorithm will make the error analysis no longer valid,

- knowledge of the order of magnitude of the values that are going to be compared. Why? Let's assume for a moment that the floating point format can hold only 4 significant decimal digits:


Example 1 (no surprises here):

  RangeMin = 56.78 (4 significant digits used)
  EPSILON_1 = 0.01234 (4 significant digits used)

  if (X >= RangeMin - EPSILON_1) and ...

which may be written as:

  if (X >= 56.78 - 0.01234) and ...

which may be written as:

  if (X >= 56.77) and ...


Example 2:

  RangeMin = 56.78 (4 significant digits used)
  EPSILON_1 = 0.0001234 (4 significant digits used)

  if (X >= RangeMin - EPSILON_1) and ...

which may be written as:

  if (X >= 56.78 - 0.0001234) and ...

which may be written as:

  if (X >= 56.78) and ...

so our EPSILON_1 has been completely - in fact - ignored, because: 56.78 - 0.0001234 = 56.7798766 (9 significant digits used) = 56.78 (4 significant digits used).


So the EPSILONs can't be chosen arbitrarily, because they will be either too small (and thus ignored) or too large (so that, in extreme cases, all values will be in range). The epsilons must be calculated with respect to the RangeMin and RangeMax values. And RangeMin and RangeMax depend on the series' data points - because the user may want to display points having X1 = -10, X2 = 0, X3 = 10, but also X1 = -100000, X2 = 0, X3 = 100000. So we are in trouble here.
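The same absorption effect exists in the real Double type, just at a much smaller scale - a quick check (plain FPC, assuming plain IEEE-754 Double arithmetic; the values are made up):

  program EpsilonAbsorption;
  {$mode objfpc}
  var
    RangeMin, Adjusted: Double;
  begin
    RangeMin := 56.78;
    Adjusted := RangeMin - 1E-16;   // far below one ulp of 56.78 (about 7E-15)
    WriteLn(Adjusted = RangeMin);   // TRUE: the epsilon was completely absorbed
    Adjusted := RangeMin - 1E-13;   // clearly above one ulp of 56.78
    WriteLn(Adjusted = RangeMin);   // FALSE: this epsilon survives
  end.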





> But: TChart.IsPointInViewPort is a public function and can be applied to any number. Since this number can be subject to round-off error as well

Not exactly: if the input number (in fact, a data point) comes from the series' data set (which is the "normal" usage of TChart.IsPointInViewPort()), it is represented precisely.




> the old issue remains: Is that number inside, outside or at the boundary of the CurrentExtent? Your recalibration of the CurrentExtent does not help here.

You are of course right. When passing some "artificial" data point to TChart.IsPointInViewPort(), widening CurrentExtent may not help. But, on the other hand, adding / subtracting epsilons to the range during the comparison may not help either!

So are we beaten? No, because we know what IsPointInViewPort() is for - we can accept some false positives. IsPointInViewPort() should return True for all the points in range, and may also return True for some points out of range - that doesn't hurt us. It could even be implemented as:

  function TChart.IsPointInViewPort(const AP: TDoublePoint): Boolean;
  begin
    Result := True;
  end;

This would make painting take much longer, but everything would still work correctly. The only thing that is not acceptable is false negatives - when some point is within the range, but False is returned.




Now, perhaps the most surprising observation - what is the difference between the following two approaches? (Please note the presence / absence of the EPSILONs.)

A)
  function TChart.IsPointInViewPort(const AP: TDoublePoint): Boolean;
  begin
    Result :=
      not IsNan(AP) and
      InRange(AP.X, FCurrentExtent.a.X - EPSILON_1, FCurrentExtent.b.X + EPSILON_2) and InRange(AP.Y, FCurrentExtent.a.Y - EPSILON_3, FCurrentExtent.b.Y + EPSILON_4);
  end;

B)
  FCurrentExtent.a.X := FCurrentExtent.a.X - EPSILON_1;
  FCurrentExtent.b.X := FCurrentExtent.b.X + EPSILON_2;
  FCurrentExtent.a.Y := FCurrentExtent.a.Y - EPSILON_3;
  FCurrentExtent.b.Y := FCurrentExtent.b.Y + EPSILON_4;

and then calling:

  function TChart.IsPointInViewPort(const AP: TDoublePoint): Boolean;
  begin
    Result :=
      not IsNan(AP) and
      InRange(AP.X, FCurrentExtent.a.X, FCurrentExtent.b.X) and InRange(AP.Y, FCurrentExtent.a.Y, FCurrentExtent.b.Y);
  end;


Well, there is no difference! This is quite helpful, because - instead of adding / subtracting EPSILONs in every place where FCurrentExtent is used - the EPSILONs can be "embedded" into FCurrentExtent!



So the last problem is: what values should be assigned to the EPSILONs? The answer: values that guarantee that every point within the range will be recognized as being in range. Additionally, the EPSILONs should be as small as possible, to avoid too many false positives.

The first solution: to obtain the EPSILONs, perform a full error analysis (this includes analyzing the calculation algorithm and the ranges of the series' data points - since too-small EPSILONs will be ignored, and too-large ones will introduce a lot of false positives).

The second solution: the EPSILONs can be obtained iteratively, as the first values that guarantee that all data points within the range will be recognized as being in range. We get an additional bonus here: we know that the range may be too tight only because of the finite floating point representation, so the error is located in the least significant digits. So one or two loop iterations that correct the least significant digits will be enough. And no algorithm knowledge or data analysis is needed, as in the first solution.


The second solution is, in fact, what is done in the loop that corrects FScale. FScale is corrected by the minimum possible amount, i.e. multiplied by 0.999999999999999 - so, as a consequence, FCurrentExtent is also enlarged by the minimum possible amount (for the given range of data points). Additional bonus: FScale, FOffset and FCurrentExtent are all coherent now. It's not surprising that, in this case, FCurrentExtent just "embeds" the EPSILONs, so no separate EPSILON values are needed.

An additional note: 0.999999999999999 uses 15 significant digits, but Double can hold even 16 significant digits for some numbers (Double in fact has a binary internal representation, so it can be transformed to either 15 or 16 decimal digits, depending on the value). But using a multiplier with 16 decimal digits, i.e. one more "9": 0.9999999999999999, may fail - similarly to the case above, where 56.78 - 0.0001234 = 56.78, we could get SomeValue * 0.9999999999999999 = SomeValue (0.9999999999999999 would, in some cases, effectively be treated as 1, so multiplying would change nothing). So the multiplier should have "only" 15 significant digits.
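A quick check that the 15-digit multiplier really does decrease a positive Double value such as a scale factor, so the correction in the loop cannot get "stuck" (plain FPC, assuming plain IEEE-754 Double arithmetic; the value 21 is arbitrary):

  program MultiplierCheck;
  {$mode objfpc}
  var
    Value, Adjusted: Double;
  begin
    Value := 21;
    Adjusted := Value * 0.999999999999999;  // 15 significant digits
    WriteLn(Adjusted < Value);              // TRUE: the adjustment is not lost in rounding
  end.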




> And it must be considered that some other code may break your recalibration. When this happens the old bug will be back.

Yes, of course. Touching FCurrentExtent may make the points disappear - regardless of whether the EPSILONs are "embedded" into FCurrentExtent or used separately in TChart.IsPointInViewPort(). The same change applied to FCurrentExtent's contents will produce the same effects in both cases.




> bug will remain in the library like a time bomb and may come back for some reason.

Whether the EPSILONs are "embedded" into FCurrentExtent or used separately - in both cases it is the same time bomb. And the requirement to use separate EPSILONs is a time bomb itself - sooner or later, someone may forget to calculate and use them :(


Regards

Marcin Wiazowski

2019-02-01 21:33

reporter   ~0113785

P.S. I successfully reopened the report, but I have no option to change the status to "feedback", as you asked. It seems that only you, as the owner of this report, can set the "feedback" status.

wp

2019-02-02 15:08

developer   ~0113794

Sorry to confuse you with the "feedback" - just re-opening the report is enough.

I will not answer the other topics, in order to avoid an endless discussion about the 15th decimal. Your arguments did not change my position, and I will not apply the patch.

Marcin Wiazowski

2019-02-02 17:33

reporter   ~0113800

Ok. Thanks for your time!

Regards


P.S. The 15th decimal really matters! It's the so-called "butterfly effect": "In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full precision 0.506127 value. The result was a completely different weather scenario."

Issue History

Date Modified Username Field Change
2019-01-28 03:03 Marcin Wiazowski New Issue
2019-01-28 03:03 Marcin Wiazowski File Added: Reproduce.zip
2019-01-28 03:04 Marcin Wiazowski File Added: patch.diff
2019-01-28 03:04 Marcin Wiazowski File Added: Animation.gif
2019-01-28 09:46 wp Assigned To => wp
2019-01-28 09:46 wp Status new => assigned
2019-01-28 10:19 wp Note Added: 0113690
2019-01-28 10:20 wp Fixed in Revision => 60235
2019-01-28 10:20 wp LazTarget => 2.0
2019-01-28 10:20 wp Status assigned => resolved
2019-01-28 10:20 wp Resolution open => fixed
2019-01-28 10:20 wp Target Version => 2.0
2019-01-28 17:14 Marcin Wiazowski Note Added: 0113698
2019-01-28 17:55 wp Note Added: 0113699
2019-01-28 17:57 wp Status resolved => assigned
2019-01-28 17:57 wp Resolution fixed => reopened
2019-01-28 17:57 wp Status assigned => feedback
2019-01-28 17:58 wp Note Edited: 0113699 View Revisions
2019-01-29 21:56 Marcin Wiazowski Note Added: 0113723
2019-01-29 21:56 Marcin Wiazowski Status feedback => assigned
2019-02-01 12:50 wp Note Added: 0113772
2019-02-01 12:51 wp Status assigned => resolved
2019-02-01 12:51 wp Resolution reopened => fixed
2019-02-01 21:28 Marcin Wiazowski Note Added: 0113784
2019-02-01 21:28 Marcin Wiazowski Status resolved => assigned
2019-02-01 21:28 Marcin Wiazowski Resolution fixed => reopened
2019-02-01 21:33 Marcin Wiazowski Note Added: 0113785
2019-02-02 15:08 wp Fixed in Revision 60235 => 60235, 60276
2019-02-02 15:08 wp Note Added: 0113794
2019-02-02 15:08 wp Status assigned => resolved
2019-02-02 15:08 wp Resolution reopened => fixed
2019-02-02 17:33 Marcin Wiazowski Note Added: 0113800
2019-02-02 17:33 Marcin Wiazowski Status resolved => closed