1. 11 Oct, 2019 2 commits
  2. 01 Apr, 2017 1 commit
  3. 26 Feb, 2017 1 commit
  4. 25 Feb, 2017 5 commits
  5. 24 Feb, 2017 1 commit
  6. 29 Jun, 2016 1 commit
  7. 06 Oct, 2015 1 commit
  8. 03 Oct, 2015 1 commit
  9. 11 Aug, 2015 1 commit
  10. 25 Mar, 2015 3 commits
  11. 05 Mar, 2015 1 commit
  12. 25 Sep, 2014 1 commit
  13. 20 Sep, 2014 1 commit
  14. 31 Mar, 2014 1 commit
  15. 29 Mar, 2014 1 commit
  16. 12 Jan, 2014 3 commits
  17. 09 Dec, 2013 1 commit
    • mali: detect and workaround mismatch between back and front buffers · eed17d55
      Siarhei Siamashka authored
      After window creation or resize, the mali blob on the client side
      requests two dri2 buffers (for back and front) from the ddx. The
      problem is that the 'swap' and 'get_buffer' operations are executed
      out of order relative to each other and we may have different
      possible patterns of dri2 communication:
      
      1. swap swap swap swap get_buffer swap get_buffer swap swap ...
      2. swap swap swap get_buffer swap swap get_buffer swap swap ...
      
      A major annoyance is that both the mali blob on the client side and
      the ddx driver in the xserver need to have the same idea about which
      one of these two buffers goes to front and which goes to back. An
      older commit https://github.com/ssvb/xf86-video-fbturbo/commit/30b4ca27d1c4
      tried to address this problem in a mostly empirical way and managed
      to solve it at least for the synthetic test gles-rgb-cycle-demo and
      for most of the real programs (such as Qt5 applications, etc.)
      
      However, it appears that this heuristic is not 100% reliable in all
      cases. The Extreme Tux Racer game run in glshim manages to trigger
      the back and front buffer mismatch, which manifests itself as
      erratic penguin movement.
      
      This patch adds a special check that randomly samples certain
      bytes from the dri2 buffers to see which one of them has been
      modified by the client application between buffer swaps. If we see
      that the rendering actually happens to the front buffer instead of
      the back buffer, then we just change the roles of these buffers.
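
      As an illustration only (not the actual driver code), a minimal C
      sketch of this sampling idea could look like the following. The names
      NUM_SAMPLE_BYTES, sample_state_t, save_front_samples and
      front_buffer_was_modified are hypothetical, and the real DRI2 buffer
      bookkeeping in the ddx differs:

      #include <stdint.h>
      #include <stdlib.h>

      #define NUM_SAMPLE_BYTES 16

      /* Hypothetical per-window state: offsets of a few randomly chosen
       * bytes and the values they had in the presumed front buffer at the
       * time of the last swap. */
      typedef struct {
          size_t  offset[NUM_SAMPLE_BYTES];
          uint8_t value[NUM_SAMPLE_BYTES];
          int     initialized;
      } sample_state_t;

      /* Remember a few randomly chosen bytes of the buffer which is
       * currently believed to be the front buffer (the client should not
       * be touching it between swaps). */
      static void save_front_samples(sample_state_t *s,
                                     const uint8_t *front, size_t size)
      {
          for (int i = 0; i < NUM_SAMPLE_BYTES; i++) {
              s->offset[i] = (size_t)rand() % size;
              s->value[i]  = front[s->offset[i]];
          }
          s->initialized = 1;
      }

      /* At the next swap: if any sampled byte of the supposed front buffer
       * has changed, the client was in fact rendering into it, so the
       * roles of the front and back buffers need to be exchanged.
       * Returns 1 on a detected mismatch. */
      static int front_buffer_was_modified(const sample_state_t *s,
                                           const uint8_t *front)
      {
          if (!s->initialized)
              return 0;
          for (int i = 0; i < NUM_SAMPLE_BYTES; i++) {
              if (front[s->offset[i]] != s->value[i])
                  return 1;
          }
          return 0;
      }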
      
      Signed-off-by: Siarhei Siamashka <siarhei.siamashka@gmail.com>
  18. 15 Nov, 2013 1 commit
  19. 26 Oct, 2013 1 commit
  20. 19 Oct, 2013 3 commits
  21. 17 Oct, 2013 1 commit
  22. 16 Oct, 2013 1 commit
  23. 08 Oct, 2013 1 commit
    • RPi: implement threshold for deciding between CPU and DMA blits · 102957f9
      Siarhei Siamashka authored
      
      
      Benchmarking was done with x11perf, modified to support a wider range
      of sizes for the scroll operation. The tests have been run at the
      stock 700MHz CPU clock frequency and with a 1280x720 32bpp desktop.
      
      $ DISPLAY=:0 ./x11perf -scroll5 -scroll10 -scroll15 -scroll20 \
                             -scroll30 -scroll50 -scroll100
      
      == CPU ==
      
      1000000 trep @   0.0289 msec ( 34600.0/sec): Scroll 5x5 pixels
      1000000 trep @   0.0387 msec ( 25800.0/sec): Scroll 10x10 pixels
      1000000 trep @   0.0459 msec ( 21800.0/sec): Scroll 15x15 pixels
       450000 trep @   0.0576 msec ( 17300.0/sec): Scroll 20x20 pixels
       350000 trep @   0.0817 msec ( 12200.0/sec): Scroll 30x30 pixels
       200000 trep @   0.1564 msec (  6390.0/sec): Scroll 50x50 pixels
       100000 trep @   0.4446 msec (  2250.0/sec): Scroll 100x100 pixels
      
      == fb_copyarea (DMA) acceleration ==
      
      1000000 trep @   0.0307 msec ( 32500.0/sec): Scroll 5x5 pixels
      1000000 trep @   0.0353 msec ( 28300.0/sec): Scroll 10x10 pixels
      1000000 trep @   0.0397 msec ( 25200.0/sec): Scroll 15x15 pixels
      1000000 trep @   0.0464 msec ( 21600.0/sec): Scroll 20x20 pixels
       400000 trep @   0.0645 msec ( 15500.0/sec): Scroll 30x30 pixels
       250000 trep @   0.1177 msec (  8500.0/sec): Scroll 50x50 pixels
       100000 trep @   0.2783 msec (  3590.0/sec): Scroll 100x100 pixels
      
      This shows that the ioctl overhead and the DMA setup cost are not so
      significant for the Raspberry Pi. DMA already becomes a bit faster
      than the CPU at a 10x10 blit size.
      
      Even though there is no significant difference between CPU and DMA
      for extremely small operation sizes (the other overhead clearly
      dominates), setting a threshold is not going to hurt:
      
      == mixed CPU / fb_copyarea (DMA) with 90 pixels threshold ==
      
      1000000 trep @   0.0291 msec ( 34300.0/sec): Scroll 5x5 pixels
      1000000 trep @   0.0345 msec ( 29000.0/sec): Scroll 10x10 pixels
      1000000 trep @   0.0395 msec ( 25300.0/sec): Scroll 15x15 pixels
      1000000 trep @   0.0466 msec ( 21400.0/sec): Scroll 20x20 pixels
       400000 trep @   0.0650 msec ( 15400.0/sec): Scroll 30x30 pixels
       250000 trep @   0.1181 msec (  8470.0/sec): Scroll 50x50 pixels
       100000 trep @   0.2784 msec (  3590.0/sec): Scroll 100x100 pixels
      
      If some other ARM devices also implement a Raspberry Pi compatible
      accelerated fb_copyarea ioctl, then the threshold selection may
      be reconsidered.
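
      As an illustration only, a minimal C sketch of such a size threshold
      is shown below. It assumes the threshold is applied to the blit area
      (width * height) and uses hypothetical names
      (FB_COPYAREA_THRESHOLD_PIXELS, cpu_copyarea, dma_copyarea, copyarea);
      the real fbturbo code path and the exact threshold condition may
      differ:

      #include <stdint.h>
      #include <string.h>

      /* Hypothetical threshold: blits covering fewer than ~90 pixels are
       * done by the CPU, larger ones go through the fb_copyarea (DMA)
       * path. The value follows the benchmark numbers above (5x5 favours
       * the CPU, 10x10 already favours DMA). */
      #define FB_COPYAREA_THRESHOLD_PIXELS 90

      /* Placeholder for the DMA path: in the real driver this would issue
       * the accelerated fb_copyarea ioctl on the framebuffer device. */
      static void dma_copyarea(int fb_fd, int sx, int sy,
                               int dx, int dy, int w, int h)
      {
          (void)fb_fd; (void)sx; (void)sy;
          (void)dx; (void)dy; (void)w; (void)h;
      }

      /* Simple CPU fallback for a 32bpp framebuffer, copying one row at a
       * time and picking the copy direction so that overlapping source and
       * destination areas stay correct. */
      static void cpu_copyarea(uint32_t *fb, int stride_pixels,
                               int sx, int sy, int dx, int dy, int w, int h)
      {
          if (dy <= sy) {
              for (int y = 0; y < h; y++)
                  memmove(&fb[(size_t)(dy + y) * stride_pixels + dx],
                          &fb[(size_t)(sy + y) * stride_pixels + sx],
                          (size_t)w * sizeof(uint32_t));
          } else {
              for (int y = h - 1; y >= 0; y--)
                  memmove(&fb[(size_t)(dy + y) * stride_pixels + dx],
                          &fb[(size_t)(sy + y) * stride_pixels + sx],
                          (size_t)w * sizeof(uint32_t));
          }
      }

      /* Route a screen-to-screen copy to the CPU or to DMA based on size. */
      static void copyarea(int fb_fd, uint32_t *fb, int stride_pixels,
                           int sx, int sy, int dx, int dy, int w, int h)
      {
          if (w * h < FB_COPYAREA_THRESHOLD_PIXELS)
              cpu_copyarea(fb, stride_pixels, sx, sy, dx, dy, w, h);
          else
              dma_copyarea(fb_fd, sx, sy, dx, dy, w, h);
      }

      With this kind of split, small scrolls keep the low CPU call overhead
      while larger ones benefit from the DMA path, which matches the mixed
      CPU/DMA numbers above.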
      
      Signed-off-by: Siarhei Siamashka <siarhei.siamashka@gmail.com>
  24. 07 Oct, 2013 1 commit
  25. 03 Oct, 2013 2 commits
  26. 22 Sep, 2013 1 commit
  27. 09 Sep, 2013 2 commits