Server islands #945
(Friday evening, sorry if a bit rambly 😅) Overall looks good, am happy this is being tackled! One issue I have with the islands being a separate request is prop validation. I am unsure if there is an elegant solution to this.

The simplest could be to document that you can't trust the props and make sure you are handling them accordingly, but personally that feels like a footgun that someone will inevitably trigger.

A band-aid-y feeling solution could be to sign the props so that when the server gets the island request it can verify it is "intended". But now you need the dev to define a secret key, or somehow store a session of it.

Another could be that the islands are part of the same response and are swapped in with some simple JS. (It was addressed here as not being ideal, but I think it is better than passing untrusted props straight into user code.)

If I was to implement it, I would go with somehow signing the props. It would be quite weird and inconvenient, so I hope there is a better solution that I am missing. |
Just wanted to add my 2 cents - I think this is a fantastic initiative! I do really agree with the sentiment Next/Vercel is pushing (something along the lines of "PPR will be the default rendering model for modern websites/web apps"). Coincidentally, I've been deeply researching this area over the last few days in an attempt to handroll my own PPR (or whatever you want to call it) with Astro. Based on this I wanted to share some general thoughts/findings:
However, thought I'd still bring it up because there's likely some great lessons to be found in the spec or in example implementation guides, such as this one from CloudFlare Workers: https://blog.cloudflare.com/edge-side-includes-with-cloudflare-workers.
My reasoning is because while there could be some default caching behaviour, surely the developer should have the final say on what is/isn't cached? Let me use my own real-world use-case to explain. I'm building an Astro web app which is highly dynamic in 2 dimensions:
Regarding the first point: if the user visits from desktop, a desktop version of the site is shown (such as the web app shell consisting of a sidebar navigation menu and top navbar). If the user visits from an iOS device, they see an iOS-themed mobile version of the app shell, e.g. with bottom tabbed navigation instead of a sidebar. And similarly, Android users will be served a material design-themed version of the mobile shell. The key point here is that each of these 3 "shells" should be fully cacheable and the cached version served to users based on what device they're viewing from, as there's no personalised UI being served. So in these situations, I would need some way to mark the server-island as "cacheable", right? Now compare this to the second point on an island with personalised data: none of this should be cached as it's different per user/could contain sensitive info? So in this case, I'd need to mark such islands as "non-cacheable"? |
I was thinking about the same thing, and do think that signing the props would be the best approach. It could be transparent to the user. The secret key could be auto-generated during build, and be cached on the server. The props would be signed with HMAC at build or SSR time, and the signature included as an attribute on the element. Next.js does something similar with its generated preview token. |
@jkhaui It doesn't make sense to tie this to a proprietary Vercel feature, even if it's open to other frameworks. Any solution should be one that users can deploy to any hosting platform. ESI is an option. The main drawback that I see in it is that there's no out-of-order rendering. The whole page blocks while the includes are loaded, so you don't get the benefit of fast loading of the shell. What could work, if you were willing to sacrifice caching, would be to implement something similar to the proposed solution (i.e. with a little bit of JS to do the replacement), but with the actual content streamed in the same response. This could be done with edge middleware. On 2 you make a good point. I think the idea would be that the shell could be SSG or SSR, and could still be cached. In your example you could render it dynamically, but send |
I would love if there were some options on how it is handled, something like:
Different adapters could then take advantage of platform-specific features to make the best implementation, with a fallback on a generic HTTP-based one. Although, from what I've heard, to make (I so, so wish there was something better than iframes for async loading html without js, would solve so many issues) |
@Tc-001 I really like the idea of supporting |
@ascorbic Yes, basically that. |
What would be the advantage of the `server:dynamic` version of this idea aside from not requiring JavaScript? The amount of JavaScript in `server:defer` is going to be very minimal; if that's the only reason then I'm not sure that it's worth it. I could see an argument that by doing it in an edge CDN you are kick-starting the request earlier than if it's delayed until the client-side JS runs. |
Probably nothing else really, other than it being the placeholder for a split second. But even then you can have the browser cache the island (if GET is chosen) and it would almost immediately switch to the correct one.
|
Hmmm... a fun thing to add could be a way to "refresh" an island.

```astro
<!-- would is:inline be needed? Here I assume hoisted scripts wouldn't work. -->
<script is:inline>
  const refreshButton = ...
  refreshButton.addEventListener("click", () => {
    refreshButton.dispatchEvent(new Event("astro:island:refresh"));
  });
</script>
```

That re-fetches the island and replaces (or diffs) the children. This raises a question of what happens if the island loads from a newer build than the rest of the page, as styles and resources would probably not align. |
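For context, the listening side of such a refresh event could be sketched roughly like this; the `data-island-url` attribute and the `refreshIsland` helper are assumptions, not part of the proposal:

```javascript
// Hypothetical sketch of handling an "astro:island:refresh" event:
// re-fetch the island's endpoint and replace (rather than diff) its children.
async function refreshIsland(island, fetchImpl = globalThis.fetch) {
  const url = island.getAttribute('data-island-url'); // assumed attribute
  const response = await fetchImpl(url, { headers: { Accept: 'text/html' } });
  if (!response.ok) return false;
  island.innerHTML = await response.text(); // a diffing swap could go here instead
  return true;
}

// Browser-only wiring (illustrative):
// document.addEventListener('astro:island:refresh', (e) => {
//   const island = e.target.closest('astro-island');
//   if (island) refreshIsland(island);
// });
```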
I think the need to block on loading the islands in |
I was thinking about this in regards to the private key idea floated above: what happens when you redeploy and the key no longer matches? Vercel has a feature called skew protection for this. You include a header and it makes sure requests get routed to the right version of the function. @ascorbic does Netlify have any similar feature to your knowledge? We could allow some configuration for adapters for this. |
This would only happen if the deploy occurred between the time when the shell starts loading and the request is sent for the islands, so I think it's a marginal edge case. The shell would always be up to date because hosts all invalidate the cache between deploys.
Not at the moment, but I think it's planned. This is only an issue for Next.js because they do SPA navigation, so a tab could be sitting open for a long time and then the user tries to navigate. This would then request the new page data and it's a new deploy with a different deploy id in the URL. |
Our most used adapter is the regular Node.js adapter. So that means people are deploying it via Docker or just manually on a VPS or something, and those types of setups are probably less likely to have synchronized static / server deployments, I imagine. Still probably an edge case, and not something we can likely help with, though. |
Maybe the user/adapter could optionally provide a fallback URL that is versioned to the correct deploy. I do think that if the platform offers something like skew protection, there should be a system that allows an adapter to take advantage of it. Could actually be something as simple as it being able to overwrite |
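As a rough illustration of the "overwrite the URL" idea, an adapter hook could pin island requests to the deploy that rendered the shell; the function name and the `deploy` query parameter here are made up:

```javascript
// Hypothetical adapter hook: append a deploy identifier to the island
// endpoint so the platform can route the request to the matching deploy
// (skew-protection style). The "deploy" param name is illustrative only.
function versionIslandUrl(islandPath, deployId) {
  const url = new URL(islandPath, 'https://placeholder.invalid');
  url.searchParams.set('deploy', deployId);
  return url.pathname + url.search;
}
```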
@matthewp In my case, I'll often let caddy handle static files directly, and only proxy dynamic calls (SSR) to nodejs adapter. Just my 2cents. |
server-island-placeholder.mp4

Ignore the content shift caused by my weak Tailwind skills. The top-right is the user's wishlist, cart, and avatar. Here I'm using placeholder content as the fallback. I think you don't really want this generic placeholder as the fallback though; you probably want it to look the same, but with 0 as the counts for Wishlist and Cart. And you want the avatar to be generic as the fallback and then become the user's avatar once it loads. Do people agree with that? What this means, though, is that you really want to use the same component as both the fallback and the deferred component. You wind up writing something like this:

```astro
<PersonalBar server:defer>
  <PersonalBar slot="fallback" placeholder />
</PersonalBar>
```

Which is repetitive and weird. The component itself has to be aware that sometimes it is loading data and sometimes it's not. Not sure what to do about this. Any suggestions appreciated. |
Hmmm... there could be an `Astro.isStatic` flag:

```astro
---
const cartCount = !Astro.isStatic && await db...
---
<div>
  {cartCount && <div>{cartCount}</div>}
  <Icon />
</div>
```

With this approach, if it is a diff instead of an innerHTML replacement, you could even add some animations to smoothly show the count. But even with this, it could be quite jarring seeing the count appear/update after each navigation. Maybe there could be an option (
I suggest something like this in the RFC: an |
I think there shouldn't be any big issues, because the prerendered state would be the same as what a fresh user in a regular SSR app would see. So the flag still has a use. Maybe it could be a regular |
Using

```astro
<Cart server:defer /> /* renders with import.meta.env.DEFERRED */
<Cart /> /* does not render with import.meta.env.DEFERRED */
```

That would break if, for example, the component stashed the value in a variable. |
Also it's not clear to me yet how often do components want to render their own deferred content vs the caller doing so. It might be most of the time the component should do it themselves, or it could be a more rare thing. |
Couldn't this be handled by requesting the page with an additional header or query param? The frontmatter is still run as if the page is being rendered as a whole, but only the single component referenced is actually rendered and returned in the response. The consequence of this, of course, is that if the page is large, there could be expensive calls which are re-run for every request. Maybe a halfway house would be something like this (taking inspiration from actions):

```astro
---
import { defineServerIsland } from "astro:islands";
export const prerender = true;

const post = await getPost(Astro.params.slug);

const likeIsland = defineServerIsland({
  name: 'like-island',
  values: {
    // cached values for the server island on regeneration
    post,
  },
  getProps: async ({ post }) => {
    // call a mailing service, or store to a database
    // access request-specific information
    const user = await getUser(Astro.cookies.get('session'))
    const liked = await user.getLikedPost(post.id)
    return { post: post.id, liked };
  },
});
---
<Like server:defer={likeIsland} />
```

In this example |
I'm new to Astro, so apologies if this was previously discussed or isn't helpful. If a lot of components are going to have loading, error, and loaded states, would it make sense to implement something like the cell pattern in Redwood.js? https://redwoodjs.com/docs/tutorial/chapter2/cells#our-first-cell

```jsx
export const QUERY = gql`
  query FindPosts {
    posts {
      id
      title
      body
      createdAt
    }
  }
`

export const Loading = () => <div>Loading...</div>

export const Empty = () => <div>No posts yet!</div>

export const Failure = ({ error }) => (
  <div>Error loading posts: {error.message}</div>
)

export const Success = ({ posts }) => {
  return posts.map((post) => (
    <article key={post.id}>
      <h2>{post.title}</h2>
      <div>{post.body}</div>
    </article>
  ))
}
```

This is a React component, but it makes handling the different loading states simple. Perhaps adding a conditional element to slots that only displayed content when a condition was matched? You could then show/hide slots as the loading status changed.

```astro
---
const { title } = Astro.props;
---
<div id="content-wrapper">
  <h1>{title}</h1>
  <slot if-state="loading">LOADING</slot>
  <slot if-state="loaded">LOADED</slot>
  <slot if-state="error">ERROR</slot>
</div>
```
|
@Tc-001 After rethinking it, opening a refresh event that can be called by user logic might be a good design that does not try to pack all use-case logic into the Astro framework. |
@wassfila Yeah, this is something I've been trying to understand better as well. There is definitely overlap. I see the biggest advantages of server islands being:
I think in some scenarios you will want to use both server and client islands together. There's nothing stopping you from having a It might also be possible to have client and server directives on the same component, but that's not something I've tackled yet. |
Right, I did not think of that. It's true that something like the GitHub logo won't change instantly, so there's room for both.
I see, wow, lots of perspectives,... thanks for the answer. |
At first I was mildly intrigued by this RFC. But with that in mind? I love it. Having two levers to defer (data and interactivity) is so cool. |
I assume this would work with With prerender, |
Without any level of frontmatter and using static props only, it would be exactly the same as just rendering the component directly on the server. You'd gain nothing. For example, the demo was to load the user's basket and account via a secondary request; that implies running some request-specific code to identify the user and retrieve their basket. I think the question posed is how do you differentiate between a server render and a render on request inside the deferred component. |
I've been waiting for something like this! I'm hoping there's support for framework components in Server Islands. |
Great to see this RFC. We are currently working on re-implementing a large B2B company and e-commerce platform with Astro and would benefit greatly from server islands from my perspective. Currently we use But from my point of view one more thing would be important: How can server islands be cached locally so that there is no flickering when switching pages? What I mean by flickering: Every time you switch to a new page, the user button is loaded as a server island, the fallback icon appears first and when the backend component is there, the correct icon is displayed. If you can use a cached state instead of the fallback icon, there is no flickering until the server island is loaded. In a The name “SWR” is derived from |
Even with SWR there's going to be a flicker, the fallback is going to be visible while the fetch is happening. Even with aggressive caching there's still the time the request takes to get to the CDN and back, then update the DOM. I don't know if there's anything we can do about that. Using ViewTransitions is maybe one solution you can take. |
🤔 You could use an inline script and session storage, as long as the swap happens before the page loads there shouldn't be a flash. |
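A rough sketch of that inline-script idea, with made-up key and attribute names: cache each island's last rendered HTML and restore it before the fresh request completes, so the fallback never flashes on repeat visits.

```javascript
// Hypothetical anti-flicker sketch: before fetching fresh content, show the
// HTML this island rendered on the previous page view, kept in sessionStorage.
function primeIslandFromCache(island, storage) {
  const key = 'island:' + island.getAttribute('data-island-id'); // assumed attribute
  const cached = storage.getItem(key);
  if (cached !== null) island.innerHTML = cached; // replaces the fallback
  return cached !== null;
}

// After the fresh response has been swapped in, remember it for the next page.
function storeIslandHtml(island, storage) {
  const key = 'island:' + island.getAttribute('data-island-id');
  storage.setItem(key, island.innerHTML);
}
```

In the browser, `storage` would be `window.sessionStorage` and `primeIslandFromCache` would run in an inline script before first paint.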
Inline scripts will need to take Content Security Policy (CSP) into account; we may need to add inline script integrity. |
There's now a preview release available to try: withastro/astro#11305 Note that it only works in dev mode. Please leave any feedback here and not in the PR. Thank you! |
Stage 3 RFC first draft is up: #963 |
Closing in favour of #963. Please use the PR to continue any further discussion. |
Summary
Allow islands within a prerendered page to be server-rendered at runtime.
Background & Motivation
Often a page may be mostly static, with only parts that need to be rendered on-demand. For example, a product page might need to show stock levels or personalised recommendations. Currently the only options are either to render the whole page on demand or to render the dynamic parts on the client. This proposal introduces the concept of deferred islands, which are not prerendered, but rather server-rendered on demand at runtime.
Next.js is working on a solution called partial pre-rendering, which allows most of the page to be prerendered, with individual postponed parts rendered using SSR on-demand. The implementation is quite different from what I propose for Astro, but the concept is similar.
Goals
Possible goals
Non-goals
Example
A component would be deferred by setting the `server:defer` directive. The `"fallback"` slot can be used to specify a placeholder that is pre-rendered and displayed while the component is loading. The page can pass props to the component like normal, and these are available when rendering the component.

The component itself does not need to do anything to support deferred rendering, so it should work with any existing component. However, deferred components can optionally use special powers, and can detect if they were deferred by checking the `Astro.deferred` prop. This means that it was deferred at build time but is now being rendered on-demand.

The special powers are available because during deferred rendering, a component is rendered like a mini page. This means it can use features such as `Astro.cookies`, and set headers on `Astro.response`. `Astro.url` and `Astro.request.url` are from the original page, and are passed in the request along with the props.

Implementation
When rendering the static page, postponed elements would not be rendered; instead, an `<astro-island>` containing any placeholder would be rendered. The `<astro-island>` would embed the serialized props, as well as the URL for the deferred endpoint.

When the page has loaded, a request would be made to each deferred endpoint (see "GET vs POST" below for considerations). This request would pass all of the props and other serialized context.
On the server, the component would effectively be rendered in a thin wrapper page that decodes and forwards the props, and rewrites the Astro global values.
When the browser has loaded the response from the endpoint, it would use it to replace the content of the island.
The runtime for replacing the deferred islands would be in an inline script tag. A simplified version without error handling could look like this:
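A minimal sketch of what such a runtime could look like, with assumed attribute names and endpoint shape (not the RFC's final design):

```javascript
// Hypothetical sketch of the inline replacement runtime: find each deferred
// island, request its endpoint with the serialized props, and swap in the HTML.
async function hydrateServerIslands(doc, fetchImpl = globalThis.fetch) {
  const islands = Array.from(doc.querySelectorAll('astro-island[data-island-url]'));
  await Promise.all(
    islands.map(async (island) => {
      const url = island.getAttribute('data-island-url');
      const props = island.getAttribute('data-props') ?? '{}'; // serialized props
      const response = await fetchImpl(url + '?props=' + encodeURIComponent(props));
      if (response.ok) island.innerHTML = await response.text();
    })
  );
}
```

Out-of-order completion falls out naturally here: each island swaps in as soon as its own response arrives.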
GET vs POST
One of the unanswered questions is whether to use GET or POST requests for the deferred component endpoints. The benefit of the POST is that it can send arbitrarily large request bodies. The benefits of GET are that they are cacheable and can be preloaded in the page head. Some options are:
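To make the trade-off concrete, the two request shapes might look like this (illustrative only; the endpoint path and helper names are made up):

```javascript
// GET: props travel in the query string - cacheable and preloadable from the
// page head, but subject to URL length limits.
function islandGetRequest(url, props) {
  return new Request(url + '?props=' + encodeURIComponent(JSON.stringify(props)));
}

// POST: props travel in the body - arbitrarily large, but not cacheable by
// default and not preloadable.
function islandPostRequest(url, props) {
  return new Request(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(props),
  });
}
```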
Why not streaming?
An alternative approach would be to send the postponed components in the same response as the initial shell. This is how Next.js PPR is currently implemented. This has some benefits, but I think they are outweighed by drawbacks in most cases. Astro has always been static-first, and I think that approach is best here too.
Primarily, a prerendered, static page is easily cacheable, both in the browser but also in a CDN. This is not the case when the deferred data is in the same response. The benefits of this are that the static part can be cached at the edge, near to users, with a very fast response time. The deferred content can be rendered and served near to the site's data without blocking rendering of the rest of the page. If you want to stream the deferred content in the same response, you either have to render everything at the origin and take the hit on distance from users, or render it all at the edge and take the hit on distance from your data. In some cases rendering everything at the edge is fine (e.g. if there's no central data source or API access), and Astro already supports that.
You can work around this with logic at the edge to combine a locally cached shell and a stream of the updates from the origin, and edge middleware to do this could be a helpful option. It still prevents the use of the browser cache, though, because it can't make conditional requests for the prerendered page: the whole thing needs to be sent on every request in case the deferred data has changed.