{"id":827,"date":"2016-09-30T16:35:16","date_gmt":"2016-10-01T00:35:16","guid":{"rendered":"http:\/\/www.ludicon.com\/castano\/blog\/?p=827"},"modified":"2022-08-23T14:43:30","modified_gmt":"2022-08-23T22:43:30","slug":"lightmap-compression-in-the-witness","status":"publish","type":"post","link":"http:\/\/www.ludicon.com\/castano\/blog\/2016\/09\/lightmap-compression-in-the-witness\/","title":{"rendered":"Lightmap Compression in The Witness"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2016\/09\/screenshot06-512x288.png\" alt=\"screenshot06\" width=\"512\" height=\"288\" class=\"aligncenter size-large wp-image-944\" srcset=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2016\/09\/screenshot06-512x288.png 512w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2016\/09\/screenshot06-267x150.png 267w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2016\/09\/screenshot06-768x432.png 768w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2016\/09\/screenshot06-800x450.png 800w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2016\/09\/screenshot06.png 1920w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><br \/>\nIn my initial implementation of our lightmapping technology I simply stored lightmap textures in RGBA16F format. This produced excellent results, but at a very high memory cost. I later switched to the R10G10B10A2 fixed point format to reduce the memory footprint of our lightmaps, but that introduced some quantization artifacts. At first glance it seemed that we would need more than 10 bits per component in order to have smooth gradients!<\/p>\n<p>At the time the RGBM color transform seemed to be a popular way to encode lightmaps. I gave that a try and the results weren&#8217;t perfect, but it was a clear improvement and I could already think of several ways of improving the encoder. 
Over time I tested some of these ideas and managed to improve the quality significantly and also reduce the size of the lightmap data. In this post I&#8217;ll describe some of these ideas and support them with examples showing my results.<\/p>\n<p><!--more--><\/p>\n<p><a href=\"http:\/\/game.watch.impress.co.jp\/docs\/20070131\/3dlp113.htm\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/capcom.png\" alt=\"capcom\" width=\"300\" height=\"208\" class=\"alignright size-full wp-image-883\" srcset=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/capcom.png 300w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/capcom-267x185.png 267w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a>I believe the RGBM transform was first proposed by Capcom <a href=\"http:\/\/game.watch.impress.co.jp\/docs\/20070131\/3dlp113.htm\">in these CEDEC 2006 slides<\/a>. While Capcom employs it for diffuse textures, it has become a popular way to encode lightmaps. RGBM or some of its variations are used in <a href=\"https:\/\/gist.github.com\/aras-p\/1199797\">Unity<\/a>, <a href=\"http:\/\/solid-angle.blogspot.com\/2014\/03\/bioshock-infinite-lighting.html\">Bioshock Infinite<\/a>, and the Unreal Engine, among others. Its use for standard color textures is not as widespread, but <a href=\"https:\/\/shaneycg.github.io\/ducktales-remastered-texture-compression\/\">Shane Calimlim found it to be a good fit for the stylized artwork of Duck Tales<\/a> and suggests it could be a good format in general. However, with so many precedents, I was surprised it had not been analyzed in more detail.<\/p>\n<p>The main challenge of compressing lightmaps is that often they have a wider range than regular diffuse textures. This range is not as large as in typical HDR textures, but it&#8217;s large enough that using regular LDR formats results in obvious quantization artifacts. 
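<\/p>
<p>To put a rough number on those artifacts, here is a back-of-the-envelope sketch (my own arithmetic, not from the original measurements; the [0, 16] range and the ~2.2 exposure figure appear later in the article) of the display-space quantization step when an 8-bit linear encoding has to cover the whole lightmap range:<\/p>

```cpp
// Display-space quantization step of a linear encoding with 'codes'
// levels spread over [0, hdr_range] light units, viewed at a given
// exposure scale. Illustrative sketch only.
inline float display_step(float hdr_range, float exposure, int codes = 256) {
    return hdr_range / float(codes - 1) * exposure;
}
```

<p>With 8 bits over a [0, 16] range at the longest exposure, <code>display_step(16.0f, 2.2f)<\/code> is about 0.14, roughly 35 times coarser than the ~1\/255 step a banding-free 8-bit gradient needs, while at a fixed exposure over [0, 1] the step is a harmless ~0.004.<\/p>
<p>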
Lightmaps don&#8217;t usually have high frequency details; they are often close to greyscale and only have smooth variations in the chrominance.<\/p>\n<p>In our case, most of our lightmap values are within the [0, 16] range, and on the rare occasions when they are outside of that range, we constrain them by clamping the colors while preserving the hue to avoid saturation artifacts. Brian Karis also <a href=\"http:\/\/graphicrants.blogspot.com\/2013\/12\/tone-mapping.html\">suggests tone mapping the upper section of the range<\/a> to avoid sharp discontinuities, but I only found this to be a problem when light sources had unreasonably high intensity values.<\/p>\n<p>The shape of the lightmap color distribution varies considerably. Interior lightmaps are predominantly dark with a long tail of brighter highlights:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/tunnel.histogram.png\" alt=\"tunnel.histogram\" width=\"512\" height=\"128\" border=\"1\" class=\"aligncenter size-full wp-image-844\" \/><\/p>\n<p>while outdoor lightmaps have a more Gaussian distribution with a bell-like shape. This particular lightmap is under the shade of some colored fall trees, which give it an orange tone:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/hut.histogram.png\" alt=\"hut.histogram\" width=\"512\" height=\"128\" border=\"1\" class=\"aligncenter size-full wp-image-845\" \/><\/p>\n<p>Not all lightmaps use all the available range, so after tone mapping the next thing we do is to scale the range to [0, 1].<\/p>\n<p>So, why is RGBM a good choice for data like this? 
The distribution of distinct values that can be represented with RGBM looks as follows: <\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/rgbm.histogram.png\" alt=\"rgbm.histogram\" width=\"512\" height=\"128\" border=\"1\" class=\"aligncenter size-full wp-image-856\" \/><\/p>\n<p>It provides much more precision toward 0 than toward 1. This is beneficial for images that are intended to be visualized at multiple exposures. We want to obtain smooth lightmaps without quantization artifacts independently of the camera exposure. However, as we will see later, this provides much more precision around 0 than is actually necessary.<\/p>\n<p><!--\nThe tone mapper transforms the colors based on the camera exposure, which in The Witness only has 4 stops. What this means is that in order to have smooth lightmaps on all lighting conditions, we need 16 times more precision in dark colors than on the highlights.\n--><\/p>\n<p><!--\n\n@@ Add tests comparing RGBM against packed floating point representations.\n\nThis is something that floating point numbers accomplish automatically, but the range that they provide is much larger than we need. In our case we only need ~4 bits of exponent. \n\n- floating point numbers are roughly logarithmically spaced.\n\n- Strange things that I do not understand:\n   - rgbm8 appears to provide more precision than half floating points in parts of the [0,1] range.\n\n\nDXGI_FORMAT_R11G11B10_FLOAT provides 5 bits of exponent per component and therefore seems like a good candidate for our tests.\n\nDXGI_FORMAT_R9G9B9E5_SHAREDEXP too.\n\nBC6\n\nhow to measure errors? The compressors minimize plain RMSE, but \n\n- lightmaps are displayed on screen tone mapped. \n- camera exposure in The Witness varies from 0.22 to 2.2\n- tone mapping compresses the highlights.\n\nIf we minimize RMSE we overemphasize errors in the highlights, which often end up compressed after tone mapping. 
Despite this I'm primarily using RMSE and \n\n--><\/p>\n<h2>Naive RGBM Encoding<\/h2>\n<p>In my initial implementation I simply used RGBA8 textures, squaring the colors to perform gamma correction in the shader. The standard <code>rgb -> RGBM<\/code> transform is as follows:<\/p>\n<pre>\r\nm = max(r,g,b)\r\nR = r\/m\r\nG = g\/m\r\nB = b\/m\r\nM = m\r\n<\/pre>\n<p>A simple improvement <a href=\"http:\/\/the-witness.net\/news\/2011\/09\/a-pleasant-lightmapping-update\/\">I made early on<\/a> is to divide the quantization interval in two. This is a variation of the idea presented in <a href=\"http:\/\/www.ppsloan.org\/publications\/HDRComp.pdf\">Microsoft&#8217;s LUVW HDR texture paper<\/a>, but instead of using an extra texture, I simply rely on the RGB and alpha (M) channels.<\/p>\n<p><!-- @@ Show equivalence between RGBM and LUVW --><\/p>\n<p>A similar observation is made by <a href=\"https:\/\/shaneycg.github.io\/ducktales-remastered-texture-compression\/\">Shane Calimlim<\/a>:<\/p>\n<blockquote><p>\nGray is encoded as pure white in the color map, which may not always be optimal. Gray is an edge case most of the time, but a smarter encoding algorithm could make vast improvements in its handling. In the simple version of the algorithm the entire burden of representing gray lies with the multiply map; this could be split between both maps, improving precision greatly in scenarios where the color map can accommodate extra data without loss.\n<\/p><\/blockquote>\n<p>But in our case grey is not really an edge case! Lightmaps are mostly grey with slight smooth color variations.<\/p>\n<p>The way I implemented this is by choosing a certain threshold <code>t<\/code>. 
For values of <code>m<\/code> that are lower than <code>t<\/code> the color is fully encoded using only the RGB components as follows:<\/p>\n<pre>\r\nR = r\/t\r\nG = g\/t\r\nB = b\/t\r\nM = 0\r\n<\/pre>\n<p>and for values of <code>m<\/code> greater than the threshold <code>t<\/code>, the normalized color is encoded in the RGB components, and the normalization factor <code>m<\/code> is biased and scaled to store it at a higher precision:<\/p>\n<pre>\r\nR = r\/m\r\nG = g\/m\r\nB = b\/m\r\nM = (m-t) \/ (1-t)\r\n<\/pre>\n<p>That&#8217;s equivalent to just doing:<\/p>\n<pre>\r\nm = max(r,g,b,t)\r\nR = r\/m\r\nG = g\/m\r\nB = b\/m\r\nM = (m-t) \/ (1-t)\r\n<\/pre>\n<p>This is useful for several reasons. As Shane notes, by splitting the burden of representing the luminance between the RGB and M maps, we can obtain more precision and reduce the size of the quantization interval.<\/p>\n<p>It&#8217;s important to note that this actually reduces precision around zero, where we don&#8217;t need so much, because the game camera never has long enough exposures. If we look at the distribution of grey levels that biased RGBM can represent, it now looks as follows:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/rgbmt.histogram.png\" alt=\"rgbmt.histogram\" width=\"512\" height=\"128\" border=\"1\" class=\"aligncenter size-full wp-image-862\" srcset=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/rgbmt.histogram.png 512w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/rgbmt.histogram-267x66.png 267w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/p>\n<p>Picking different values of <code>t<\/code> allows us to use different quantization intervals for different parts of the color range. The optimal choice of <code>t<\/code> depends on the distribution of colors in the lightmap and the number of bits used to represent each of the components. 
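<\/p>
<p>Putting the pieces above together, a minimal encoder\/decoder sketch (my own illustration, assuming 8-bit storage; <code>quantize8<\/code> is a helper I introduce here, not part of the original code) looks like this:<\/p>

```cpp
#include <algorithm>
#include <cmath>

struct RGBM { unsigned char R, G, B, M; };

// Quantize a [0, 1] value to 8 bits.
static unsigned char quantize8(float x) {
    x = std::min(std::max(x, 0.0f), 1.0f);
    return (unsigned char)std::lround(x * 255.0f);
}

// Biased RGBM encode: m = max(r, g, b, t), with M remapped from [t, 1] to [0, 1].
RGBM encode_rgbm(float r, float g, float b, float t) {
    float m = std::max(std::max(r, g), std::max(b, t));
    return { quantize8(r / m), quantize8(g / m), quantize8(b / m),
             quantize8((m - t) / (1.0f - t)) };
}

// Matching decode: undo the bias/scale of M, then multiply back.
void decode_rgbm(RGBM c, float t, float* r, float* g, float* b) {
    float m = (c.M / 255.0f) * (1.0f - t) + t;
    *r = (c.R / 255.0f) * m;
    *g = (c.G / 255.0f) * m;
    *b = (c.B / 255.0f) * m;
}
```

<p>Note how colors darker than <code>t<\/code> fall out naturally: <code>m<\/code> becomes <code>t<\/code>, so <code>M<\/code> encodes as 0 and the RGB channels carry the full color at <code>1\/t<\/code> precision.<\/p>
<p>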
We chose this value experimentally. For our lightmaps, values around 0.3 seemed to work best when encoding them in RGBA8 format.<\/p>\n<h2>Optimized RGBM Encoding<\/h2>\n<p>With these improvements RGBM was already producing very good results. Visually I could not see any difference between the RGBM lightmaps and the raw half floating point lightmaps. However, I had not reduced the size of the lightmaps by much and ideally I wanted to compress them further.<\/p>\n<p>The next thing I tried was to choose <code>M<\/code> in a way that minimizes the quantization error. I did that by brute force, trying all possible values of <code>M<\/code>, computing the corresponding <code>RGB<\/code> values for that choice of <code>M<\/code>, and selecting the one that minimized the MSE:<\/p>\n<pre>\r\nfloat bestError = FLT_MAX;\r\nfloat bestM = 0.0f;\r\n\r\nfor (int m = 0; m < 256; m++) {\r\n    \/\/ Decode M.\r\n    float M = float(m) \/ 255.0f * (1 - threshold) + threshold;\r\n\r\n    \/\/ Encode RGB.\r\n    int R = ftoi_round(255.0f * saturate(r \/ M));\r\n    int G = ftoi_round(255.0f * saturate(g \/ M));\r\n    int B = ftoi_round(255.0f * saturate(b \/ M));\r\n\r\n    \/\/ Decode RGB.\r\n    float dr = (float(R) \/ 255.0f) * M;\r\n    float dg = (float(G) \/ 255.0f) * M;\r\n    float db = (float(B) \/ 255.0f) * M;\r\n\r\n    \/\/ Measure error.\r\n    float error = square(r - dr) + square(g - dg) + square(b - db);\r\n\r\n    if (error < bestError) {\r\n        bestError = error;\r\n        bestM = M;\r\n    }\r\n}\r\n<\/pre>\n<p>This improved the error substantially, but it introduced interpolation artifacts. The RGBM encoding is not linear, so interpolation of RGBM colors is not correct. With the naive method this was not a big deal, because adjacent texels usually had similar values of <code>M<\/code>, but the <code>M<\/code> values resulting from this optimization procedure were not necessarily similar anymore.<\/p>\n<p><!-- @@ A better analysis of interpolation errors. 
--><\/p>\n<p>However, it was easy to solve this problem by constraining the search to a small range around the <code>M<\/code> value selected with the naive method:<\/p>\n<pre>\r\nfloat M = max(max(r, g), max(b, threshold));\r\nint iM = ftoi_ceil((M - threshold) \/ (1 - threshold) * 255.0f);\r\n\r\nfor (int m = max(iM-16, 0); m < min(iM+16, 256); m++) {\r\n    ...\r\n}\r\n<\/pre>\n<p>This constraint did not reduce the quality noticeably, but it eliminated the interpolation artifacts entirely.<\/p>\n<p>While this idea showed that there's significant optimization potential over the naive approach, it did not get us any closer to our stated goal: to reduce the size of the lightmaps. I tried to use a packed pixel format such as RGBA4, but even with the optimized encoding, it did not produce sufficiently high quality results. To reduce the size further we would have to use DXT block compression.<\/p>\n<h2>RGBM-DXT5<\/h2>\n<p>Simply compressing the RGBM data produced poor results, and compressing the optimized RGBM data did not help; it only degraded the results even more.<\/p>\n<p>A brute force compressor is not practical in this case, because when processing blocks of 4x4 colors simultaneously the search space is much larger.<\/p>\n<p>A better approach is to first compress the <code>RGB<\/code> values obtained through the naive procedure using a standard DXT1 compressor and then choose the <code>M<\/code> values to compensate for the quantization and compression errors of the DXT1 component.<\/p>\n<p>That is, we want to compute <code>M<\/code> so that:<\/p>\n<pre>\r\nM * (R, G, B) == (r, g, b)\r\n<\/pre>\n<p>This gives us three equations that we can minimize in the least squares sense. 
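<\/p>
<p>Spelling that minimization out (this is just the standard one-variable least squares step, written in the post&#8217;s notation):<\/p>

```latex
E(M) = (MR - r)^2 + (MG - g)^2 + (MB - b)^2
\frac{dE}{dM} = 2R(MR - r) + 2G(MG - g) + 2B(MB - b) = 0
\Rightarrow\quad M\,(R^2 + G^2 + B^2) = rR + gG + bB
```

<p>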
The <code>M<\/code> that minimizes the error is:<\/p>\n<pre>\r\nM = dot(rgb, RGB) \/ dot(RGB, RGB)\r\n<\/pre>\n<p>In my tests, the resulting <code>M<\/code> values compressed very well in the alpha map and reduced the error significantly.<\/p>\n<p>I also tried to encode <code>RGB<\/code> again with the newly obtained <code>M<\/code>, and compress them afterward, but in most cases that did not improve the error. Something that worked well was to simply weight the <code>RGB<\/code> error by <code>M<\/code> in the initial compression step.<\/p>\n<p>The number of bits allocated for the <code>RGB<\/code> and <code>M<\/code> components is very different from that of our initial RGBA8 texture, so the choice of <code>t<\/code> had to be reviewed. In this case, values of <code>t<\/code> around 0.15 produced the best results. I attribute this to the reduced number of bits per pixel used to encode the <code>RGB<\/code> channels.<\/p>\n<h2>Results<\/h2>\n<p>In addition to the described formats I also compared the proposed method against BC6. BC6 is specifically designed to encode HDR textures, but it's not available on all hardware. 
Our optimized RGBM-DXT5 scheme provides nearly the same quality as BC6:<\/p>\n<figure id=\"attachment_832\" aria-describedby=\"caption-attachment-832\" style=\"width: 594px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/rmse.png\" alt=\"rmse\" width=\"594\" height=\"370\" class=\"aligncenter size-full wp-image-832\" srcset=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/rmse.png 594w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/rmse-267x166.png 267w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/rmse-512x318.png 512w\" sizes=\"auto, (max-width: 594px) 100vw, 594px\" \/><figcaption id=\"caption-attachment-832\" class=\"wp-caption-text\">RMSE<\/figcaption><\/figure>\n<p>The chart above displays RMSE values of the final images after color space conversion and range rescaling.<\/p>\n<p>To study the effectiveness of the encoders, it's more useful to look at the errors before rescaling. 
These look a lot more uniform, but cannot be compared against BC6 anymore, since in that case adjusting the range of the input values does not usually reduce the compression error.<\/p>\n<figure id=\"attachment_833\" aria-describedby=\"caption-attachment-833\" style=\"width: 613px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/normalized-rmse.png\" alt=\"normalized-rmse\" width=\"613\" height=\"325\" class=\"aligncenter size-full wp-image-833\" srcset=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/normalized-rmse.png 613w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/normalized-rmse-267x141.png 267w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/normalized-rmse-512x271.png 512w\" sizes=\"auto, (max-width: 613px) 100vw, 613px\" \/><figcaption id=\"caption-attachment-833\" class=\"wp-caption-text\">Normalized RMSE<\/figcaption><\/figure>\n<p>Finally, I thought it would be interesting to use RGBM-DXT5 to compress standard images and compare it against YCoCg-DXT5. 
The following chart shows the results for the first 8 images of the kodim image set:<\/p>\n<figure id=\"attachment_834\" aria-describedby=\"caption-attachment-834\" style=\"width: 554px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/RGBM-YCoCg.png\" alt=\"RGBM vs YCoCg\" width=\"554\" height=\"298\" class=\"size-full wp-image-834\" srcset=\"http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/RGBM-YCoCg.png 554w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/RGBM-YCoCg-267x143.png 267w, http:\/\/www.ludicon.com\/castano\/blog\/wp-content\/uploads\/2014\/12\/RGBM-YCoCg-512x275.png 512w\" sizes=\"auto, (max-width: 554px) 100vw, 554px\" \/><figcaption id=\"caption-attachment-834\" class=\"wp-caption-text\">RGBM vs YCoCg<\/figcaption><\/figure>\n<p>YCoCg-DXT5 is clearly a much better choice for LDR color textures.<\/p>\n<h2>Conclusions and Future Work<\/h2>\n<p>Our proposed RGBM encoder was good enough for our lightmaps, but I'm convinced there's more room for improvement.<\/p>\n<p>One idea would be to pick a different threshold <code>t<\/code> for each texture. Finding the best <code>t<\/code> for a given texture to be encoded using the plain RGBM linear format would be easy, but it's not so obvious when using block compression.<\/p>\n<p>The RGB components are encoded with a standard weighted DXT1 compressor. It would be interesting to use a specialized compressor that favored <code>RGB<\/code> values with errors that the <code>M<\/code> component could correct. For example, the <code>M<\/code> values resulting from the least squares minimization are sometimes above 1 but need to be clamped to the <code>[0, 1]<\/code> range; it should be possible to constrain the <code>RGB<\/code> endpoints to prevent that. 
It may also be possible to choose <code>RGB<\/code> endpoints such that the error of the least squares fitted <code>M<\/code> is as small as possible.<\/p>\n<p>Finally, DXT5 is not available on most mobile GPUs. I haven't tried this yet, but it seems the ETC2 EAC_RGBA8 format is widely available and would be a good fit for the techniques presented here. It would also be interesting to compare our method against packed floating point formats such as R11G11B10_FLOAT and R9G9B9E5_SHAREDEXP, and against ASTC's HDR mode.<\/p>\n<h2>Tables<\/h2>\n<p>In all cases I measured the error using the RMSE metric, which is the same metric used to guide the block compressors. It may make more sense to use a metric that takes into account how the lightmaps are visualized in the game. I did exactly that: I tone mapped the lightmaps at different exposures and computed the error in post-tone-mapping space. The tables below show the resulting values, and they roughly correlate with the plain RMSE metric.<\/p>\n<pre>\r\n               Tone mapped error\r\n            e=2.2   e=1.0   e=0.22     average   rmse\r\n\r\nRGBM8 naive:\r\n\r\nhallway:    0.00026 0.00045 0.00089 -> 0.00053 | 0.00007\r\nhut:        0.00100 0.00102 0.00082 -> 0.00095 | 0.00609\r\narchway:    0.00114 0.00141 0.00190 -> 0.00148 | 0.00818\r\nwindmill:   0.00102 0.00133 0.00185 -> 0.00140 | 0.00083\r\nshaft:      0.00201 0.00228 0.00214 -> 0.00214 | 0.00798\r\nhub:        0.00151 0.00182 0.00191 -> 0.00175 | 0.00267\r\ntower:      0.00153 0.00200 0.00299 -> 0.00217 | 0.00160\r\ntunnel:     0.00094 0.00123 0.00171 -> 0.00129 | 0.00093\r\nmine:       0.00105 0.00120 0.00141 -> 0.00122 | 0.00640\r\ntheater:    0.00099 0.00126 0.00160 -> 0.00128 | 0.00129\r\n\r\nRGBM8 optimized:\r\n\r\nhallway     0.00010 0.00015 0.00030 -> 0.00018 | 0.00004\r\nhut         0.00049 0.00043 0.00031 -> 0.00041 | 0.00543\r\narchway     0.00044 0.00060 0.00122 -> 0.00075 | 0.00595\r\nwindmill    0.00020 0.00026 0.00036 -> 0.00027 | 0.00024\r\nshaft      
 0.00059 0.00066 0.00102 -> 0.00076 | 0.00501\r\nhub         0.00038 0.00051 0.00085 -> 0.00058 | 0.00099\r\ntower       0.00060 0.00072 0.00082 -> 0.00072 | 0.00112\r\ntunnel      0.00025 0.00031 0.00042 -> 0.00033 | 0.00048\r\nmine:       0.00044 0.00049 0.00083 -> 0.00058 | 0.00467\r\ntheater:    0.00061 0.00076 0.00087 -> 0.00075 | 0.00095\r\n\r\nRGBM4 optimized:\r\n\r\nhallway:    0.00169 0.00259 0.00562 -> 0.00330 | 0.00063\r\nhut:        0.00932 0.00899 0.00773 -> 0.00868 | 0.08317\r\narchway:    0.00906 0.01287 0.02616 -> 0.01603 | 0.09614\r\nwindmill:   0.00424 0.00562 0.00830 -> 0.00606 | 0.00402\r\nshaft:      0.01103 0.01314 0.01978 -> 0.01465 | 0.08204\r\nhub:        0.00868 0.01160 0.01848 -> 0.01292 | 0.01722\r\ntower:      0.01004 0.01217 0.01466 -> 0.01229 | 0.01835\r\ntunnel:     0.00516 0.00687 0.01066 -> 0.00757 | 0.00764\r\nmine:       0.00871 0.01044 0.01742 -> 0.01219 | 0.07510\r\ntheater:    0.00683 0.00840 0.00963 -> 0.00829 | 0.01057\r\n\r\nDXT5 naive:\r\n\r\nhallway:    0.00155 0.00249 0.00570 -> 0.00325 | 0.00048\r\nhut:        0.00487 0.00536 0.00564 -> 0.00529 | 0.02119\r\narchway:    0.00500 0.00656 0.01039 -> 0.00731 | 0.01949\r\nwindmill:   0.00214 0.00287 0.00444 -> 0.00315 | 0.00177\r\nshaft:      0.01062 0.01339 0.01977 -> 0.01459 | 0.03412\r\nhub:        0.00616 0.00796 0.01130 -> 0.00848 | 0.01481\r\ntower:      0.00551 0.00712 0.01019 -> 0.00761 | 0.00735\r\ntunnel:     0.00235 0.00308 0.00451 -> 0.00331 | 0.00285\r\nmine:       0.00471 0.00589 0.00877 -> 0.00646 | 0.01809\r\ntheater:    0.00332 0.00412 0.00496 -> 0.00413 | 0.00498\r\n\r\nDXT5 optimized:\r\n\r\nhallway:    0.00125 0.00199 0.00456 -> 0.00260 | 0.00041\r\nhut:        0.00336 0.00373 0.00408 -> 0.00372 | 0.01529\r\narchway:    0.00353 0.00460 0.00719 -> 0.00511 | 0.01285\r\nwindmill:   0.00134 0.00180 0.00280 -> 0.00198 | 0.00116\r\nshaft:      0.00801 0.01016 0.01507 -> 0.01108 | 0.02437\r\nhub:        0.00469 0.00602 0.00846 -> 0.00639 | 0.01241\r\ntower:      
0.00421 0.00544 0.00781 -> 0.00582 | 0.00599\r\ntunnel:     0.00157 0.00206 0.00306 -> 0.00223 | 0.00193\r\nmine:       0.00338 0.00428 0.00646 -> 0.00471 | 0.01178\r\ntheater:    0.00245 0.00302 0.00357 -> 0.00301 | 0.00382\r\n\r\nDXT5 optimized with M-weighted RGB:\r\n\r\nhallway:    0.00114 0.00184 0.00430 -> 0.00243 | 0.00038\r\nhut:        0.00338 0.00382 0.00443 -> 0.00388 | 0.01478\r\narchway:    0.00356 0.00464 0.00725 -> 0.00515 | 0.01271\r\nwindmill:   0.00134 0.00180 0.00281 -> 0.00198 | 0.00113\r\nshaft:      0.00804 0.01023 0.01522 -> 0.01116 | 0.02382\r\nhub:        0.00472 0.00611 0.00868 -> 0.00650 | 0.01088\r\ntower:      0.00421 0.00544 0.00787 -> 0.00584 | 0.00597\r\ntunnel:     0.00157 0.00206 0.00306 -> 0.00223 | 0.00193\r\nmine:       0.00337 0.00428 0.00648 -> 0.00471 | 0.01170\r\ntheater:    0.00245 0.00302 0.00356 -> 0.00301 | 0.00382\r\n<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>In my initial implementation of our lightmapping technology I simply stored lightmap textures in RGBA16F format. This produced excellent results, but at a very high memory cost. I later switched to the R10G10B10A2 fixed point format to reduce the memory footprint of our lightmaps, but that introduced some quantization artifacts. 
At first glance it seemed&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[20],"class_list":["post-827","post","type-post","status-publish","format-standard","hentry","category-coding","tag-the-witness"],"_links":{"self":[{"href":"http:\/\/www.ludicon.com\/castano\/blog\/wp-json\/wp\/v2\/posts\/827","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.ludicon.com\/castano\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.ludicon.com\/castano\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.ludicon.com\/castano\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.ludicon.com\/castano\/blog\/wp-json\/wp\/v2\/comments?post=827"}],"version-history":[{"count":76,"href":"http:\/\/www.ludicon.com\/castano\/blog\/wp-json\/wp\/v2\/posts\/827\/revisions"}],"predecessor-version":[{"id":953,"href":"http:\/\/www.ludicon.com\/castano\/blog\/wp-json\/wp\/v2\/posts\/827\/revisions\/953"}],"wp:attachment":[{"href":"http:\/\/www.ludicon.com\/castano\/blog\/wp-json\/wp\/v2\/media?parent=827"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.ludicon.com\/castano\/blog\/wp-json\/wp\/v2\/categories?post=827"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.ludicon.com\/castano\/blog\/wp-json\/wp\/v2\/tags?post=827"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}