Jonkman Microblog
Notices by Nolan (nolan@toot.cafe), page 13

  1. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 17:09:46 EDT
    • wohali 💯

    @wohali Does that work? I figure Google would notice the ruse.

    In conversation Wednesday, 21-Aug-2019 17:09:46 EDT from toot.cafe permalink
  2. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 17:09:16 EDT

    Or I could just do like Robin Rendle and say fuck it to AMP: https://www.robinrendle.com/notes/taking-shortcuts.html

    "Here’s my hot take on this: fuck the algorithm, fuck the impressions, and fuck the king. I would rather trade those benefits and burn my website to the ground than be under the boot and heel of some giant, uncaring corporation."

    In conversation Wednesday, 21-Aug-2019 17:09:16 EDT from toot.cafe permalink
  3. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 17:06:44 EDT

    I could disable AMP, but then if what the author says is true, I'd lose that sweet juicy Chrome traffic. Even if I don't make money on my blog, I don't really see the point in disabling it. In effect I'm doing POSSE - Publish on Own Site, Syndicate Elsewhere.

    In conversation Wednesday, 21-Aug-2019 17:06:44 EDT from toot.cafe permalink
  4. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 17:02:52 EDT

    "Google Is Tightening Its Grip on Your Website" by Owen Williams https://onezero.medium.com/google-is-tightening-its-iron-grip-on-your-website-27e06b3150e0

    "AMP adoption is also the only way to gain access to Google’s Discover feed, which features articles on the page that appears when you open a new tab in the Chrome browser."

    Welp, now I know why I get traffic to my blog from this thing. WordPress enabled AMP by default, and I never bothered to turn it off.

    In conversation Wednesday, 21-Aug-2019 17:02:52 EDT from toot.cafe permalink

    Attachments

    1. Google Is Tightening Its Grip on Your Website
      from Medium
      A new AMP update shows how the speed-boosting technology can infiltrate every corner of the internet
  5. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 13:11:30 EDT
    • Eitan K
    • Surma

    @eitanmk @surma We totally call it that... I have no idea what it means.

    In conversation Wednesday, 21-Aug-2019 13:11:30 EDT from toot.cafe permalink
  6. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 11:12:34 EDT

    In any case, I don't have the answers. Tufekci's article and of course the classic Nadia Eghbal paper remain sobering reads on the topic.

    - https://www.wired.com/story/altruism-open-source-fuels-web-businesses-love-to-exploit-it/
    - https://www.fordfoundation.org/media/2976/roads-and-bridges-the-unseen-labor-behind-our-digital-infrastructure.pdf

    In conversation Wednesday, 21-Aug-2019 11:12:34 EDT from toot.cafe permalink

    Attachments

    1. Altruism Still Fuels the Web. Businesses Love to Exploit It.
      from WIRED
      How open source projects reveal technology's free rider problem.
  7. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 11:10:21 EDT

    Again, though, I don't have a better solution. Are donation requests in postinstall scripts the answer? Maybe not. You're mostly shouting at the developers, i.e. the people at the bottom of the corporate hierarchy, i.e. the people who don't have the money. The CEO doesn't see the postinstall scripts. It ends up just being something that some dev has to clean up so that their CI builds don't take so long. (Not that I hold it against the maintainers for trying.)

    In conversation Wednesday, 21-Aug-2019 11:10:21 EDT from toot.cafe permalink
  8. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 11:06:22 EDT

    Not to say there aren't problems with the current incentive structure. In my case, I was mostly motivated by:

    1. Whatever tickled my interest (good)
    2. Career advancement (kinda good)
    3. Social media fame (not so good)
    4. Guilt (not good at all)

    I believe #2 and #3 especially lead to a kind of glut of "look at me!" projects, which are low on value but high on emojis and superlatives, and usually get abandoned when the ratio of Twitter likes to GitHub issues decreases.

    In conversation Wednesday, 21-Aug-2019 11:06:22 EDT from toot.cafe permalink
  9. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 11:02:48 EDT

    Thing is, I don't have a better economic model in mind. Assuming open-source authors actually got paid for their work, it would completely change the incentive structure and become something different. As soon as there's the possibility of payment, you start attracting the hucksters, the frauds, the get-rich-quicksters. Look at how much junk there is on YouTube. Look at SEO spam. Do I want that in npm? I'm not so sure.

    In conversation Wednesday, 21-Aug-2019 11:02:48 EDT from toot.cafe permalink
  10. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 11:01:00 EDT

    Ever since reading Jaron Lanier's "Who Owns the Future?" I can't help but see all this stuff as a product of the internet driving the cost of information to zero. People write software for free, corporations take advantage of it. Hey, free labor! Same thing happens to influencers, creators, basically everybody producing "content" online. There's all this free value lying around, so businesses would be stupid not to slurp it up. Same goes for our data feeding ML algorithms.

    In conversation Wednesday, 21-Aug-2019 11:01:00 EDT from toot.cafe permalink
  11. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 10:59:13 EDT

    I got really tired of that life. I don't really maintain my old libraries anymore. I ignore all the open issues and pull requests, except for every few months or so when I get bored. I mostly just work on Pinafore now, which is an AGPL-licensed webapp, so it's not mission-critical for anybody's business. It's just regular Mastodon users filing issues and PRs.

    In conversation Wednesday, 21-Aug-2019 10:59:13 EDT from toot.cafe permalink
  12. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 10:57:37 EDT

    I've spent many many hours of my life doing open-source work for free. I used to spend a lot more time on libraries and such (before working on Pinafore).

    One thing that continually got on my nerves was people emailing me to say things like, "This project is mission-critical for our business! We really need you to fix this bug / investigate our issue / get on a call with us!" And it's like, seriously? I'm doing this for free. On my weekends and evenings.

    In conversation Wednesday, 21-Aug-2019 10:57:37 EDT from toot.cafe permalink
  13. Nolan (nolan@toot.cafe)'s status on Wednesday, 21-Aug-2019 10:55:01 EDT

    A recent article by Zeynep Tufekci in Wired clued me in to this GitHub thread on core-js adding a donation request to its postinstall script: https://github.com/zloirock/core-js/issues/548

    It's kind of incredible to me the amount of self-righteousness and entitlement in that thread. The real tragedy is that unpaid maintainers are doing all this work for free, and big corporations are happily exploiting it without a care in the world.

    In conversation Wednesday, 21-Aug-2019 10:55:01 EDT from toot.cafe permalink

    Attachments

    1. zloirock/core-js
      from GitHub
      Standard Library. Contribute to zloirock/core-js development by creating an account on GitHub.
  14. Nolan (nolan@toot.cafe)'s status on Tuesday, 20-Aug-2019 15:39:42 EDT
    • mattgen88

    @mattgen88 Yeah at least for my personal projects (e.g. Pinafore), that's just too much effort. I really may look into running it in a VM when I develop locally.

    In conversation Tuesday, 20-Aug-2019 15:39:42 EDT from toot.cafe permalink
  15. SpookyWorks (spankyworks@cybre.space)'s status on Tuesday, 20-Aug-2019 11:54:36 EDT

    Three billion devices run Java

    In conversation Tuesday, 20-Aug-2019 11:54:36 EDT from cybre.space permalink Repeated by nolan
  16. Nolan (nolan@toot.cafe)'s status on Sunday, 11-Aug-2019 13:42:48 EDT

    New blog post: "High-performance input handling on the web" https://nolanlawson.com/2019/08/11/high-performance-input-handling-on-the-web/

    In conversation Sunday, 11-Aug-2019 13:42:48 EDT from toot.cafe permalink

    Attachments

    1. High-performance input handling on the web
      By Nolan Lawson from Read the Tea Leaves

      Update: In a follow-up post, I explore some of the subtleties across browsers in how they fire input events.

      There is a class of UI performance problems that arise from the following situation: An input event is firing faster than the browser can paint frames.

      Several events can fit this description:

      • scroll
      • wheel
      • mousemove
      • touchmove
      • pointermove
      • etc.

      Intuitively, it makes sense why this would happen. A user can jiggle their mouse and deliver precise x/y updates faster than the browser can paint frames, especially if the UI thread is busy and thus the framerate is being throttled (also known as “jank”).

      In the above screenshot, pointermove events are firing faster than the framerate can keep up.[1] This can also happen for scroll events, touch events, etc.

      Update: In Chrome, pointermove is actually supposed to align/throttle to requestAnimationFrame automatically, but there is a bug where it behaves differently with Dev Tools open.

      The performance problem occurs when the developer naïvely chooses to handle the input directly:

      element.addEventListener('pointermove', () => {
        doExpensiveOperation()
      })
      

      In a previous post, I discussed Lodash’s debounce and throttle functions, which I find very useful for these kinds of situations. Recently however, I found a pattern I like even better, so I want to discuss that here.
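      For illustration, the core idea behind a time-based throttle can be sketched in a few lines. This is a simplified leading-edge version for clarity, not Lodash's actual implementation (which also supports trailing calls, cancellation, and more):

```javascript
// Simplified leading-edge throttle: fn runs at most once per `ms` window.
// A sketch of the general idea only, not Lodash's implementation.
function simpleThrottle (fn, ms) {
  let last = 0
  return (...args) => {
    const now = Date.now()
    if (now - last >= ms) {
      last = now
      fn(...args)
    }
  }
}

// Rapid successive calls within the window collapse into one invocation:
let count = 0
const throttled = simpleThrottle(() => { count++ }, 50)
throttled()
throttled()
throttled()
// count is now 1
```

      The catch, as discussed below, is that ms is an arbitrary constant, which may not match the rate at which the device can actually paint.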

      Understanding the event loop

      Let’s take a step back. What exactly are we trying to achieve here? Well, we want the browser to do only the work necessary to paint the frames that it’s able to paint. For instance, in the case of a pointermove event, we may want to update the x/y coordinates of an element rendered to the DOM.

      The problem with Lodash’s throttle()/debounce() is that we would have to choose an arbitrary delay (e.g. 20 milliseconds or 50 milliseconds), which may end up being faster or slower than the browser is actually able to paint, depending on the device and browser. So really, we want to throttle to requestAnimationFrame():

      element.addEventListener('pointermove', () => {
        requestAnimationFrame(doExpensiveOperation)
      })
      

      With the above code, we are at least aligning our work with the browser’s event loop, i.e. firing right before style and layout are calculated.

      However, even this is not really ideal. Imagine that a pointermove event fires three times for every frame. In that case, we will essentially do three times the necessary work on every frame:

      This may be harmless if the code is fast enough, or if it’s only writing to the DOM. However, if it’s both writing to and reading from the DOM, then we will end up with the classic layout thrashing scenario,[2] and our rAF-based solution is actually no better than handling the input directly, because we recalculate the style and layout for every pointermove event.

      Note the style and layout recalculations in the purple blocks, which Chrome marks with a red triangle and a warning about “forced reflow.”

      Throttling based on framerate

      Again, let’s take a step back and figure out what we’re trying to do. If the user is dragging their finger across the screen, and pointermove fires 3 times for every frame, then we actually don’t care about the first and second events. We only care about the third one, because that’s the one we need to paint.

      So let’s only run the final callback before each requestAnimationFrame. This pattern will work nicely:

      function throttleRAF () {
        let queuedCallback
        return callback => {
          if (!queuedCallback) {
            requestAnimationFrame(() => {
              const cb = queuedCallback
              queuedCallback = null
              cb()
            })
          }
          queuedCallback = callback
        }
      }
      

      We could also use cancelAnimationFrame for this, but I prefer the above solution because it’s calling fewer DOM APIs. (It only calls requestAnimationFrame() once per frame.)
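      For comparison, a cancelAnimationFrame-based variant might look something like the following (a sketch, not the post's recommended version). Each incoming callback cancels the previously scheduled frame, so only the latest one runs, but it touches a DOM API on every event rather than once per frame:

```javascript
// Variant using cancelAnimationFrame: each new callback cancels the
// previously scheduled frame, so only the latest callback runs.
// Note it calls rAF/cAF once per event, not once per frame.
function throttleRAFWithCancel () {
  let rafId = null
  return callback => {
    if (rafId !== null) {
      cancelAnimationFrame(rafId)
    }
    rafId = requestAnimationFrame(() => {
      rafId = null
      callback()
    })
  }
}
```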

      This is nice, but at this point we can still optimize it further. Recall that we want to avoid layout thrashing, which means we want to batch all of our reads and writes to avoid unnecessary recalculations.

      In “Accurately measuring layout on the web”, I explore some patterns for queuing a timer to fire after style and layout are calculated. Since writing that post, a new web standard called requestPostAnimationFrame has been proposed, and it fits the bill nicely. There is also a good polyfill called afterframe.
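      As a rough illustration of how such a polyfill can work: a macrotask queued from inside a requestAnimationFrame callback fires after the browser has finished style and layout for that frame. The sketch below assumes MessageChannel is available, and is only an approximation of the idea, not afterframe's exact code:

```javascript
// Approximate requestPostAnimationFrame: queue a macrotask (via
// MessageChannel) from inside a rAF callback, so it runs after the
// browser has calculated style and layout for the frame.
function requestPostAnimationFramePolyfill (callback) {
  requestAnimationFrame(() => {
    const channel = new MessageChannel()
    channel.port1.onmessage = () => {
      channel.port1.close()
      callback()
    }
    channel.port2.postMessage(undefined)
  })
}
```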

      To best align our DOM updates with the browser’s event loop, we want to follow these simple rules:

      1. DOM writes go in requestAnimationFrame().
      2. DOM reads go in requestPostAnimationFrame().

      The reason this works is because we write to the DOM right before the browser will need to calculate style and layout (in rAF), and then we read from the DOM once the calculations have been made and the DOM is “clean” (in rPAF).

      If we do this correctly, then we shouldn’t see any warnings in the Chrome Dev Tools about “forced reflow” (i.e. a forced style/layout outside of the browser’s normal event loop). Instead, all layout calculations should happen during the regular event loop cycle.

      In the Chrome Dev Tools, you can tell the difference between a forced layout (or “reflow”) and a normal one because of the red triangle (and warning) on the purple style/layout blocks. Note that above, there are no warnings.

      To accomplish this, let’s make our throttler more generic, and create one that can handle requestPostAnimationFrame as well:

      function throttle (timer) {
        let queuedCallback
        return callback => {
          if (!queuedCallback) {
            timer(() => {
              const cb = queuedCallback
              queuedCallback = null
              cb()
            })
          }
          queuedCallback = callback
        }
      }
      

      Then we can create multiple throttlers based on whether we’re doing DOM reads or writes:[3]

      const throttledWrite = throttle(requestAnimationFrame)
      const throttledRead = throttle(requestPostAnimationFrame)
      
      element.addEventListener('pointermove', e => {
        throttledWrite(() => {
          doWrite(e)
        })
        throttledRead(() => {
          doRead(e)
        })
      })
      

      Effectively, we have implemented something like fastdom, but using only requestAnimationFrame and requestPostAnimationFrame!

      Pointer event pitfalls

      The last piece of the puzzle (at least for me, while implementing a UI like this), was to avoid the pointer events polyfill. I found that, even after implementing all the above performance improvements, my UI was still janky in Firefox for Android.

      After some digging with WebIDE, I found that Firefox for Android currently does not support Pointer Events, and instead only supports Touch Events. (This is similar to the current version of iOS Safari.) After profiling, I found that the polyfill itself was taking up a lot of my frame budget.

      So instead, I switched to handling pointer/mouse/touch events myself. Hopefully in the near future this won’t be necessary, and all browsers will support Pointer Events! We’re already close.

      Here is the before-and-after of my UI, using Firefox on a Nexus 5:

       

      When handling very performance-sensitive scenarios, like a UI that should respond to every pointermove event, it’s important to reduce the amount of work done on each frame. I’m sure that this polyfill is useful in other situations, but in my case, it was just adding too much overhead.

      One other optimization I made was to delay updates to the store (which trigger some extra JavaScript computations) until the user’s drag had completed, instead of on every drag event. The end result is that, even on a resource-constrained device like the Nexus 5, the UI can actually keep up with the user’s finger!

      Conclusion

      I hope this blog post was helpful for anyone handling scroll, touchmove, pointermove, or similar input events. Thinking in terms of how I’d like to align my work with the browser’s event loop (using requestAnimationFrame and requestPostAnimationFrame) was useful for me.

      Note that I’m not saying to never use Lodash’s throttle or debounce. I use them all the time! Sometimes it makes sense to just let a timer fire every n milliseconds – e.g. when debouncing window resize events. In other cases, I like using requestIdleCallback – for instance, when updating a non-critical part of the UI based on user input, like a “number of characters remaining” counter when typing into a text box.

      In general, though, I hope that once requestPostAnimationFrame makes its way into browsers, web developers will start to think more purposefully about how they do UI updates, leading to fewer instances of layout thrashing. fastdom was written in 2013, and yet its lessons still apply today. Hopefully when rPAF lands, it will be much easier to use this pattern and reduce the impact of layout thrashing on web performance.

      Footnotes

      1. In the Pointer Events Level 2 spec, it says that pointermove events “may be coalesced or aligned to animation frame callbacks based on UA decision.” So hypothetically, a browser could throttle pointermove to fire only once per rAF (and if you need precise x/y events, e.g. for a drawing app, you can use getCoalescedEvents()). It’s not clear to me, though, that any browser actually does this. Update: see comments below, some browsers do! In any case, throttling the events to rAF in JavaScript accomplishes the same thing, regardless of UA behavior.

      2. Technically, the only DOM reads that matter in the case of layout thrashing are DOM APIs that force style/layout, e.g. getBoundingClientRect() and offsetLeft. If you’re just calling getAttribute() or classList.contains(), then you’re not going to trigger style/layout recalculations.

      3. Note that if you have different parts of the code that are doing separate reads/writes, then each one will need its own throttler function. Otherwise one throttler could cancel the other one out. This can be a bit tricky to get right, although to be fair the same footgun exists with Lodash’s debounce/throttle.
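      To make the footgun in footnote 3 concrete, here is a small self-contained demo using the same throttle(timer) implementation from above, with a fake timer standing in for requestAnimationFrame. When a DOM write and a DOM read share one throttler, the read overwrites the queued write, which is silently dropped:

```javascript
// Demonstrates the shared-throttler footgun: the second queued callback
// silently replaces the first one before the frame fires.
function throttle (timer) {
  let queuedCallback
  return callback => {
    if (!queuedCallback) {
      timer(() => {
        const cb = queuedCallback
        queuedCallback = null
        cb()
      })
    }
    queuedCallback = callback
  }
}

// Fake timer that collects callbacks, standing in for requestAnimationFrame
const pendingFrames = []
const fakeRAF = cb => pendingFrames.push(cb)

const shared = throttle(fakeRAF)
const ran = []
shared(() => ran.push('write'))  // queued...
shared(() => ran.push('read'))   // ...and this overwrites the queued write
pendingFrames.forEach(cb => cb())
// ran is ['read'] - the write never ran. Use one throttler per concern.
```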

  17. Nolan (nolan@toot.cafe)'s status on Thursday, 08-Aug-2019 10:30:30 EDT

    One of my favorite subtle changes that Firefox and Chrome made recently, and which I really appreciate, is making it so the backspace button doesn't go back anymore.

    Can't tell you how many times I've been editing a piece of text, lost focus, pressed the backspace button, then the browser navigated back and I lost my work. Alt+left exists; I don't need the backspace for this.

    In conversation Thursday, 08-Aug-2019 10:30:30 EDT from toot.cafe permalink
  18. Nolan (nolan@toot.cafe)'s status on Friday, 02-Aug-2019 10:50:53 EDT
    in reply to

    BTW I definitely see performance as an issue of access. If the CEO, the investors, and the developers are all using beefy desktops on fast connections, then they'll never notice what their less fortunate customers are experiencing on a hand-me-down Android phone or a busted laptop with an HDD. It's easy to miss performance problems if you're not paying attention to users with less money than you.

    In conversation Friday, 02-Aug-2019 10:50:53 EDT from toot.cafe permalink
  19. Nolan (nolan@toot.cafe)'s status on Friday, 02-Aug-2019 10:47:33 EDT

    I wonder if one reason more websites aren't accessible/performant is the underlying business model. If investors are always screaming at you about growth, then you'll focus on features, features, features, to the detriment of less salient virtues like performance, accessibility, security, etc.

    In conversation Friday, 02-Aug-2019 10:47:33 EDT from toot.cafe permalink
  20. Nolan (nolan@toot.cafe)'s status on Friday, 02-Aug-2019 10:32:49 EDT

    One thing I find confusing about accessibility and compliance is that WCAG defines three different conformance levels: AAA, AA, and A. https://www.w3.org/TR/WCAG20/#conformance-reqs

    Under the law today, can websites get sued for not meeting A? AA? AAA? Or is it a self-policing kind of thing? As in, do legal departments just set their own targets based on their appetite for risk?

    In conversation Friday, 02-Aug-2019 10:32:49 EDT from toot.cafe permalink
Jonkman Microblog is a social network, courtesy of SOBAC Microcomputer Services. It runs on GNU social, version 1.2.0-beta5, available under the GNU Affero General Public License.

Creative Commons Attribution 3.0 All Jonkman Microblog content and data are available under the Creative Commons Attribution 3.0 license.
