Infinite Scrolling on Hugo List Pages


As my website’s content grows, Hugo’s list pages grow with it. Without pagination, the list items on my site’s section and tag list pages would keep accumulating, so a visit to any of my site’s list pages would essentially download every piece of content I’ve published.

However, I want my site’s presentation to mimic a social media timeline, and a traditional paginated site–with navigation buttons–breaks that mimicry. Infinite scrolling has many consequences for usability; unusable footers and broken history are just two of the potential problems. Despite the downsides, I’ve determined that infinite scrolling makes sense for my site and decided to implement it.

Goals

Steps Taken

I’ve already fleshed out much of the JavaScript thought process in Thoughts on Infinite Scroll Pagination. Most of what remained was to make the code work and finish the implementation.

Paginate the Homepage and List Pages

While Mike Roibu, in Infinite Scrolling Pagination in Hugo Website, opted to use Hugo’s Pagination support but iterate over the entire range, I’ve decided to use Hugo’s Pagination by default, with customized navigation instead of the provided internal template.

As such, I’ve utilized Hugo’s Pagination essentially as described on their website:

<!-- Create the paginator before it's used -->
{{- $paginator := .Paginate .Site.RegularPages -}}

<!-- Previous page link; hide if there is no previous page -->
{{- if .Paginator.HasPrev -}}
  <a class="paginator-previous-page" href="{{ .Paginator.Prev.URL }}">Previous</a>
{{- end -}}

<!-- Iterate over all pages -->
<div class="all-entries">
  {{- range $paginator.Pages -}}
    <!-- Do however you wish to display individual pages -->
  {{ end }}
</div>

<!-- Next page link; hide if there is no next page -->
{{- if .Paginator.HasNext -}}
  <a class="paginator-next-page" href="{{ .Paginator.Next.URL }}">Next</a>
{{- end -}}

Implement the Fetch in JavaScript

Without JavaScript, the website provides paginated pages of list items that users may navigate through by using the Previous and Next anchors, if present on the page. What JavaScript needs to accomplish, then, is to fetch the next page and append the new list items to the bottom of the current page. I used the Fetch API for this purpose:

async function loadNextListPage() {
  let currentPagePaginationContainer = document.querySelector(".all-entries");
  let currentPageNextLink = document.querySelector(".paginator-next-page");
  let nextPage = currentPageNextLink.getAttribute("href");

  /* Try to get the next page asynchronously */
  console.log("pagination: Fetching next page...");
  try {

    let response = await fetch(nextPage);

    if (response.ok) {

      let data = await response.text();
      console.log("pagination: Next page fetched!");

      /* Get the new Pagination items of the next page and append */
      let parser = new DOMParser();
      let nextPageDom = parser.parseFromString(data, "text/html");
      console.log("pagination: Data parsed into temporary DOM!");
      let newPaginatorItems = nextPageDom.querySelector(".all-entries").children;
      /* '.children' is a live HTMLCollection: appendChild() moves each node out
      of it, shrinking the collection, so always take item(0) rather than
      indexing with a counter */
      if (newPaginatorItems.length > 0)
        newPaginatorItems.item(0).removeAttribute("open");
      while (newPaginatorItems.length > 0) {
        currentPagePaginationContainer.appendChild(newPaginatorItems.item(0));
      }
      console.log("pagination: New items added!");

      /* Update the history to the last page loaded */
      let state = { 
        "status": "pagination: New list items added",
        "previousPage": window.location.pathname + window.location.search,
        "currentPage": nextPage };
      history.pushState(state, "", nextPage);
      console.log("pagination: New history pushed - ", state);

      /* Update the next page link on the current page */
      let newNextLink = nextPageDom.querySelector(".paginator-next-page");
      if (newNextLink) { // When there is no next page, newNextLink is 'null'
        currentPageNextLink.setAttribute("href", newNextLink.getAttribute("href"));
        console.log("pagination: Updated next page link!");
      } else { // When there are no other pages, remove the next page link
        if (currentPageNextLink.parentNode)
          currentPageNextLink.parentNode.removeChild(currentPageNextLink);
        console.log("pagination: Removed next page anchor!");
      }

    } else throw new Error("Fetch response was unsuccessful with status '"
        + response.statusText + " (" + response.status + ")'");

  } catch (error) {
    /* A fetch() promise will reject with a TypeError when a network error is
    encountered or CORS is misconfigured on the server-side, although this usually
    means permission issues or similar — a 404 does not constitute a network error,
    for example. 
    
    There are two places where awaited Promises may happen (and, ergo, will reject
    and throw an exception): the actual fetching of the next page and reading the
    response stream and converting it into a string. Considering a network error
    would preclude any access to my site and CORS is not utilized, any errors should,
    essentially, fall out of scope. I am not 100% certain when reading the response
    stream may fail. However, if something does throw an exception, I should know
    about it.
    
    In addition, this function will throw an exception if the Promises from the
    Fetch API still resolve, but the response was considered unsuccessful (status
    not in range of 200-299). I am not 100% certain what would cause an unsuccessful
    status with consideration of how my site is currently maintained and generated.
    However, should the Fetch's API return an unsuccessful response for any reason,
    I also want to know about it.*/

    console.error("pagination: Caught a Fetch API exception! - ", error);
  }
}

In practice, when this function runs, it downloads the next page, much as a user would by clicking the Next anchor. The download cost is the same with or without JavaScript, so this method downloads somewhat more than implementations that request only the new items. Where it differs is what happens after the download: instead of navigating to the next page, the function parses it with a DOMParser, which returns a Document. From there, standard DOM manipulation functions append the new items to the current page.

As I’ve commented, there are four situations where the fetch will fail and content won’t load: a network error, misconfigured Cross-Origin Resource Sharing (CORS), a failure while reading the response stream, or the Promise resolving with an unsuccessful response. None of these should realistically affect my site–mainly because if there were a network error, my site would not be accessible in the first place. Put another way, I’m not confident about which exceptions I may realistically encounter and need to catch. For now, the function simply catches any errors and reports them in the console.

You’ll notice in the function that once the new items have been appended to the current page, the current next page link is updated with the next next page and the history is updated using the History API:

/* Update the history to the last page loaded */
let state = { 
  "status": "pagination: New list items added",
  "previousPage": window.location.pathname + window.location.search,
  "currentPage": nextPage };
history.pushState(state, "", nextPage);
console.log("pagination: New history pushed - ", state);

/* Update the next page link on the current page */
let newNextLink = nextPageDom.querySelector(".paginator-next-page");
if (newNextLink) { // When there is no next page, newNextLink is 'null'
  currentPageNextLink.setAttribute("href", newNextLink.getAttribute("href"));
  console.log("pagination: Updated next page link!");
} else { // When there are no other pages, remove the next page link
  if (currentPageNextLink.parentNode)
    currentPageNextLink.parentNode.removeChild(currentPageNextLink);
  console.log("pagination: Removed next page anchor!");
}

To point the Next anchor at the following page, I replace its current href attribute with the one from the downloaded page. The history–again, using the History API–is updated with history.pushState(), which adds each page the function downloads to the session history. On its own, this does not reload the page when the user presses Back in their web browser; a later step brings the expected behavior back as close as possible.

Call loadNextListPage() on Next Anchor’s onclick

Right now, loadNextListPage() exists but nothing calls it, so the page still functions as a traditional paginated experience. The first step is to call the function whenever a user presses the Next anchor. I don’t call the function directly, however. Instead, I use two functions:

/* Add onclick to '.paginator-next-page' if Fetch API is available */
function checkForFetchSupport() {
  if (("fetch" in window) && (document.querySelector(".paginator-next-page"))) {
    console.log("pagination: Fetch API available");
    document.querySelector(".paginator-next-page").setAttribute("onclick",
      "manuallyLoadNextPage(event);"); // Pass the click event explicitly
  }
}
addLoadEvent(checkForFetchSupport);

/* The onclick event for '.paginator-next-page' */
function manuallyLoadNextPage(event) {
  event.preventDefault(); // Disable the default href click event
  loadNextListPage();
}

The first function checks whether the Fetch API is available in the user’s browser. From there, following the principles of Unobtrusive JavaScript, it programmatically adds manuallyLoadNextPage() to the Next anchor as an onclick event. Adding the onclick event in this manner means the anchor will only call the function if the Fetch API is available and will fall back to loading the next page traditionally if clicked.

You’ll note that I’m using addLoadEvent(). This came from Lee Underwood’s Using Multiple JavaScript Onload Functions. I want it to check for the availability of the Fetch API and add the onclick event only once the page loads.
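For reference, the pattern from that article chains each new function onto whatever window.onload handler already exists, instead of assigning window.onload directly and clobbering handlers registered by other scripts. A minimal sketch of that pattern might look like this (the exact implementation in the article may differ slightly):

```javascript
/* Sketch of addLoadEvent(): queue multiple functions to run on page load
   without overwriting any previously registered window.onload handler. */
function addLoadEvent(func) {
  const oldOnload = window.onload;
  if (typeof oldOnload !== "function") {
    // No handler registered yet: just assign ours
    window.onload = func;
  } else {
    // A handler already exists: wrap it so both run, in registration order
    window.onload = function () {
      oldOnload();
      func();
    };
  }
}
```

With this in place, each script can safely call addLoadEvent(someFunction) and every registered function fires once the page loads.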

The reason for using manuallyLoadNextPage() instead of calling loadNextListPage() directly is so that the default event for an anchor can be disabled. Otherwise, clicking the Next anchor would navigate to the next page traditionally on top of running the onclick handler.

Use IntersectionObserver to Call loadNextListPage()

To review, at this point I’ve utilized Hugo’s Pagination to paginate the page as graceful degradation and programmatically added an onclick event to the Next anchor to manually load the next items into the page. To actually implement the infinite scrolling behavior, I utilize the power of the Intersection Observer API to call loadNextListPage() once something (in this case, my footer) becomes visible on the screen:

/* Use the Intersection Observer API to get new items when scrolled down */
function observeForInfiniteScroll() {
  let nextPageObserver = new IntersectionObserver((entries, observer) => {
    /* This is the callback function */
    let firstEntry = entries[0]; // I'm not using multiple thresholds, so 0 suffices
    if (firstEntry.isIntersecting) { // Basically if the footer is in view, run
      if (!document.querySelector(".paginator-next-page")) {
        observer.disconnect(); // Stop the observer when there's no more pages
        console.log("pagination: IntersectionObserver disconnected!");
      }
      else loadNextListPage();
    }
  }, {
    rootMargin: "0px 0px 80px 0px"
  });
  nextPageObserver.observe(document.querySelector(".icon-attribution"));
}
if (("IntersectionObserver" in window) && ("fetch" in window)) {
  console.log("pagination: IntersectionObserver and Fetch API available");
  addLoadEvent(observeForInfiniteScroll);
}

The constructor for IntersectionObserver takes two arguments: a callback function (I used an anonymous function here) and an options object (I added a rootMargin to trigger the callback once the footer is within 80 pixels of the viewable area of the screen). Once you’ve given both of those, just call .observe() with the element that you wish the observer to watch.

Like the code above, I’m checking whether both the Intersection Observer API and the Fetch API are available in the user’s browser. If they are, I use addLoadEvent() to call the function once the page fully loads.

Now that this is complete, infinite scrolling is finally implemented. As you scroll past a certain point, the callback is called and new items are appended to the current page. This behavior continues until the last page is reached (signaled by the removal of the Next anchor), at which point the Intersection Observer disconnects and stops observing my footer. The implementation is finished, but there’s still one thing to tidy up.

Fixing the Back Button

When you scroll through any list page on my site, go anywhere else (like to see a full post or to a different site altogether), then press Back, the last page loaded will be shown. For example, if you started from page 1, loaded page 2, went somewhere else, then went back, the browser will load–essentially start–from page 2. If you keep pressing Back, however, it will not load the previous entries; it will just pop through all the history states that were made when I used history.pushState() to update the history.

To mitigate this problem, I’ve added this to my JavaScript:

/* When a 'popstate' event triggers--like pressing the Back button--reload the
page. Do note that this isn't particularly ideal, in my opinion, but it's better
than what would happen without it--the history state updating without actually
reloading the page, thus causing posts to potentially not be there. However, in
the context of social media use, scrolling up after scrolling down strikes me
as an unlikely use case. */

window.onpopstate = function(event) {
  document.querySelector(".all-entries").scrollIntoView(true);
  history.go();
  console.log("pagination: Previous page loaded!");
}

Do note that this is not ideal; it breaks what a user expects to happen with the Back button, because instead of returning to whatever they were viewing before reaching my pages, the browser goes back through and reloads each of the previous pages that were fetched. Essentially, they never clicked the Next anchor to view more content (it happened automatically), yet they still have to press Back once for every page they ‘viewed.’

However, I feel this is better behavior than doing nothing, which would leave nothing but the address bar updating when the user presses the Back button.

Final Thoughts

While the Back button behavior is (somewhat) mitigated, there are still other problems that are endemic to all infinite scroll implementations: no way to return to the same position (my mitigation with the Back button doesn’t fix that at all), the scrollbar becomes useless, the footer also becomes useless, potential slowdown due to browser memory use once things get really large, and other problems. However, as I am presenting my site much like a social media timeline feed, I think infinite scroll is an appropriate way to display a summary of my posts.

The nice thing is that if I no longer want infinite scrolling (or if I’m persuaded to drop it), I simply have to remove the IntersectionObserver or the entirety of the JavaScript. That takes me back to a traditionally paginated list page, using just Hugo’s Pagination feature.

All in all, I’m pretty pleased with how this turned out. I get the consequences of implementing infinite scrolling and I feel like I’m making good decisions, but I’m relieved that should I want to reverse it, it’ll be easy.
