
Infinity Scroll

The Infinity Scroll feature enables dynamic data loading as users scroll through the grid, efficiently managing large datasets by loading data in chunks and cleaning up unused data to optimize memory usage.

  • Dynamic data loading based on scroll position
  • Pre-loads data ahead of scroll direction
  • Efficient memory management by cleaning up off-screen data
  • Configurable chunk size and buffer sizes
  • Support for both known and unknown total data sizes

To enable infinity scroll in your grid:

import { InfinityScrollPlugin } from '@revolist/revogrid-plugin-infinity-scroll';

const grid = document.createElement('revo-grid');
grid.plugins = [InfinityScrollPlugin];
grid.additionalData = {
  infinityScroll: {
    chunkSize: 50,          // Number of rows per chunk
    bufferSize: 100,        // How many rows to keep in buffer
    preloadThreshold: 0.75, // When to start loading more data (0-1)
    total: 1000,            // Optional: total number of rows
    loadData: async (skip, limit, order, filter) => {
      // Fetch data from your API
      const response = await fetch(
        `/api/data?skip=${skip}&limit=${limit}&order=${JSON.stringify(order)}&filter=${JSON.stringify(filter)}`
      );
      const data = await response.json();
      return data; // You can return an array or { data, hasMore, total }
    },
  },
};
Option            Type      Default    Description
chunkSize         number    undefined  Number of rows to load in each chunk. Can be set dynamically by the grid.
bufferSize        number    undefined  Number of rows to keep in the memory buffer. Can be set dynamically by the grid.
preloadThreshold  number    0.75       When to trigger loading the next chunk (0-1).
total             number    undefined  Total number of rows. If not provided, the plugin grows the source on the fly: the scroll area expands as you scroll.
loadData          function  required   Function to load data chunks. Return row[] or { data: row[], hasMore?: boolean, total?: number }.
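To make preloadThreshold concrete, here is a hypothetical sketch (an illustration, not the plugin's internals) of how a 0-1 threshold can map to a trigger row:

```javascript
// Illustration only: with preloadThreshold 0.75, the next chunk is
// requested once the last visible row passes 75% of the loaded rows.
function shouldPreload(lastVisibleRow, loadedRows, preloadThreshold = 0.75) {
  return lastVisibleRow >= Math.floor(loadedRows * preloadThreshold);
}

// With 200 rows loaded and the default threshold, preloading starts at row 150.
shouldPreload(149, 200); // false
shouldPreload(150, 200); // true
```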

When your backend does not provide a fixed total, return hasMore from loadData. This is the most reliable way to tell the plugin when to stop requesting additional chunks.

grid.additionalData = {
  infinityScroll: {
    chunkSize: 100,
    loadData: async (skip, limit, order, filter) => {
      const response = await fetch('/api/data', {
        method: 'POST',
        body: JSON.stringify({ skip, limit, order, filter }),
      });
      const result = await response.json();
      return {
        data: result.items,
        hasMore: result.hasMore,
      };
    },
  },
};

You can also return total from loadData. When provided, the plugin updates its internal total size and uses it for source sizing and end detection.
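For example, a loadData that reports the server's total count might look like this (the `/api/data` endpoint and the `totalCount` field of the response are assumptions about your backend):

```javascript
grid.additionalData = {
  infinityScroll: {
    chunkSize: 50,
    loadData: async (skip, limit) => {
      const response = await fetch(`/api/data?skip=${skip}&limit=${limit}`);
      const result = await response.json();
      return {
        data: result.items,
        total: result.totalCount, // plugin adopts this as the source size
      };
    },
  },
};
```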

The plugin automatically manages memory by:

  • Loading new chunks of data as the user scrolls
  • Maintaining a buffer of rows before and after the visible area
  • Cleaning up data that’s far from the current viewport
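The bullets above can be sketched as a sliding window. This is a hypothetical illustration of the bookkeeping, not the plugin's actual code:

```javascript
// Illustration only: rows outside the retained window around the
// viewport are candidates for cleanup.
function retainedRange(viewportStart, viewportEnd, bufferSize, totalLoaded) {
  const start = Math.max(0, viewportStart - bufferSize);
  const end = Math.min(totalLoaded, viewportEnd + bufferSize);
  return { start, end }; // rows outside [start, end) may be released
}
```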

Infinity scroll keeps only a moving window of rows in the grid source. If you need a complete workbook, fetch the export dataset from the same backend API and pass those rows to ExportExcelPlugin instead of relying on the currently rendered viewport. The demo export button uses a temporary hidden grid with the same columns so the visible infinite-scroll grid is not disturbed while the workbook is generated.
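A minimal sketch of fetching the complete dataset for export, assuming a paginated `/api/data` endpoint that returns `{ items, hasMore }` (both are placeholders for your backend):

```javascript
// Sketch: collect the full dataset page by page for export,
// instead of reading the grid's windowed source.
async function fetchAllRows(pageSize = 1000) {
  const rows = [];
  let skip = 0;
  for (;;) {
    const res = await fetch(`/api/data?skip=${skip}&limit=${pageSize}`);
    const { items, hasMore } = await res.json();
    rows.push(...items);
    if (!hasMore) break;
    skip += items.length;
  }
  return rows; // pass these rows to the hidden export grid's source
}
```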

  1. Choose Appropriate Chunk Size

    • Smaller chunks mean more frequent loading but less memory usage
    • Larger chunks mean fewer API calls but more memory usage
  2. Configure Buffer Size

    • Larger buffers provide smoother scrolling but use more memory
    • Smaller buffers save memory but might cause more loading events
  3. Optimize Data Loading

    • Implement server-side pagination in your API
    • Return only necessary data fields
    • Consider data compression for large datasets
  4. Handle Loading States

    • Show loading indicators during data fetching
    • Handle errors gracefully
    • Consider implementing retry logic for failed requests
grid.additionalData = {
  infinityScroll: {
    chunkSize: 50,
    bufferSize: 100,
    loadData: async (skip, limit, order, filter) => {
      try {
        // Show loading state
        const response = await fetch(
          `/api/data?skip=${skip}&limit=${limit}&order=${JSON.stringify(order)}&filter=${JSON.stringify(filter)}`
        );
        return await response.json();
      } catch (error) {
        console.error('Failed to load data:', error);
        return []; // Return an empty array on error
      } finally {
        // Hide loading state
      }
    },
  },
};

This implementation provides efficient handling of large datasets while maintaining optimal performance and user experience.