I've been on something of a binge moving a bunch of stuff to self-hosting at home. Yesterday I finally completed my self-hosted Forgejo instance, together with Linux, Windows (via VM), and macOS (via Mac Mini) runners/workers for CI/CD, so everything finally lives in-house (literally), instead of all the source code + Actions being on GitHub while the infrastructure actually lives locally.
This is probably the first time I've felt vindicated in a self-hosting move literally the day after finishing the migration; very pleasant feeling. Usually it takes a month or two before I get there.
The idea of a homelab is appealing to me, but then I actually start building one and get tired of it quickly. When I’ve been fixing broken systems at work all day I don’t really want to have to be my own sysadmin too.
I've got a nice and powerful Minisforum on my desk that I bought at Christmas and haven't even switched on.
I've tried for 15 years to keep a homelab, but in the past I always got lost in the complexity after a year or so. About 3 years ago I gave NixOS a try for managing everything, which (perhaps counter-intuitively) suddenly made everything easier: now I can come back after months away and still understand where everything is and how it works just by reading the config.
Setting up Forgejo + runners declaratively is probably ~100 lines in total, and it doesn't matter if I forget how it works; I just have to spend five minutes reading to catch up when I come back in 6 months to change or fix something.
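For reference, a minimal sketch of what such a declarative setup can look like on NixOS. The module and option names follow the nixpkgs `services.forgejo` and `services.gitea-actions-runner` modules; the hostname, labels, and token path are hypothetical placeholders:

```nix
{ config, ... }:
{
  # Forgejo itself; DOMAIN/ROOT_URL are placeholder values.
  services.forgejo = {
    enable = true;
    settings.server = {
      DOMAIN = "git.example.home";
      ROOT_URL = "https://git.example.home/";
    };
  };

  # A single Actions runner registered against that instance.
  services.gitea-actions-runner.instances.local = {
    enable = true;
    name = "nixos-runner";
    url = "https://git.example.home/";
    # Registration token lives outside the repo/store.
    tokenFile = "/var/secrets/runner-token";
    labels = [ "nixos" "native:host" ];
  };
}
```

That really is on the order of a few dozen lines for the whole forge-plus-runner setup, which is what makes the "read it for five minutes and catch up" workflow feasible.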
I think the trick to avoid getting tired of it is to just make it as simple as humanly possible. The less stuff you have, the easier it gets; at least that's intuitive :)
Yup this is what I've got up and running recently and it's been awesome.
My setup is roughly the following.
- Dell OptiPlex mini running Proxmox for compute. Unraid NAS for storage.
- Debian VM on the Proxmox machine running Forgejo and Komodo for container management.
- Monorepo in Forgejo for the homelab infrastructure. This lets me give Claude access to just the monorepo on my local machine to help me build stuff out, without needing to give it direct access to any of my actual servers.
- Claude helps me build out a deployment pipeline for VMs/containers in Forgejo Actions, which looks like:
- Forgejo runner creates NixOS builds => deploys VMs via Proxmox API => deploys containers via Komodo API
- I've got separate VMs for:
  - a gateway for reverse proxy & authentication
  - monitoring with a Prometheus/Loki/Grafana stack
  - general-use applications

Since storage is external via NFS shares, I can tear down and rebuild the VMs whenever I need to redeploy something.

All of my Docker Compose files and Nix configs live in the monorepo on Forgejo, so I can use Renovate to keep everything up to date.
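As a sketch, a pipeline like that could be expressed as a Forgejo Actions workflow along these lines. Forgejo Actions uses GitHub Actions-compatible syntax; the runner label, secret names, flake attribute, and deploy scripts below are all hypothetical placeholders, not the actual setup:

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: nixos          # placeholder runner label
    steps:
      - uses: actions/checkout@v4
      - name: Build NixOS configuration
        run: nix build .#nixosConfigurations.gateway.config.system.build.toplevel
      - name: Deploy VM via Proxmox API
        run: ./scripts/deploy-vm.sh gateway   # placeholder script calling the Proxmox REST API
        env:
          PROXMOX_TOKEN: ${{ secrets.PROXMOX_TOKEN }}
      - name: Deploy containers via Komodo API
        run: ./scripts/deploy-stacks.sh       # placeholder script calling the Komodo API
        env:
          KOMODO_KEY: ${{ secrets.KOMODO_KEY }}
```

The appeal of this shape is that the runner only ever talks to the Proxmox and Komodo APIs, so the monorepo (and anything with access to it, like Claude) never needs direct credentials for the servers themselves.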
Plan files, kanban board, and general documentation live adjacent to Nix and Docker configs in the monorepo, so Claude has all the context it needs to get things done.
I did this because I got tired of using Docker templates on Unraid. They were a great way to get started, but it's hard to pin container versions and still keep them up to date (Unraid relies heavily on the `latest` tag). I've been moving stuff over to this setup bit by bit and really enjoying it so far.
Unless you actually need the hardware (local LLM host, massive data transformation jobs), it's also easy to fall into the many-machines trap. A single old laptop, N97, OptiPlex, etc. sitting in a corner is actually a huge amount of computing power that will rival most cloud offerings. A single machine can do so much.
Thanks. Yeah, I've probably been overcomplicating it before. I was running Kubernetes on Talos, thinking that at least it would be familiar. Power tools like that for running simple workloads on a single node are just inviting headaches.
My Raspberry Pis (and an OrangePi) have better availability than GitHub, and if they were down I'd be out of power/internet and wouldn't be able to work much anyway.
I recently did this as well, and one of the things that has struck me is just how fast Actions are compared to GitHub!
That said, I've got Linux and macOS set up with a Mac Mini (using a Claude-generated Ansible task file), but configuring a Windows VM seemed a bit painful. You didn't happen to find anything to simplify the deployment process here, did you?
The only problem I've found with Forgejo is a lack of fine-grained permissions, and also the lack of an API for pulling Actions invocations. The Actions log API endpoints are present in Gitea, from what I can tell.
Forgejo 15 was just released last week with repo-specific access tokens. More to come in the future.
I moved my forge to my home; outside of a little stress getting all the containers wrangled, it was pretty effortless to set up Forgejo.
I do need a good backup solution though, that’s one thing I’m missing.
I self-host Forgejo for personal and indie-startup purposes, and like it well enough.
The downside with that is it misses one of the key purposes of GitHub: posturing for job-hunting/hopping. It's another performative checkbox, like memorizing Leetcode and practicing delivery for brogrammer interviews.
If you don't appear active on GitHub specifically (not even Codeberg, GitLab, nor something else), you're going to get dismissed from a lot of job applications, with "do you even lift, bro" style dissing, from people who have very simple conceptions of what software engineers do, and why.
There is a fairly straightforward feature in Forgejo to mirror your repos to GitHub, if that's what you want to do. It's not perfect, of course, but it should help advertise your projects and keep your activity heatmap green.
I mostly use Forgejo for my private repos, which are free at GitHub but with many limitations. One month I burned all my private CI minutes on the 1st due to a hung Mac runner. Love not having to worry about that now!
> If you don't appear active on GitHub specifically... you're going to get dismissed from a lot of job applications
Sometimes I wonder if my coursemates back in the day, who automated commits to private repos just to keep their contribution graphs packed with green, actually got any mileage out of it.
I get that. To counter it I usually try to have at least one public repo on my Forgejo instance and link to it on my resume/LinkedIn. It helps that I'm angling for security/infra positions, so the self-hosting aspect actually works in my favor; but even without that, I'd imagine it signals something. Maybe not ideal for the most mainstream jobs (whatever that even means...), but I suspect some people will be intrigued by the initiative.
Edit: to the "do you even lift bro", the response becomes "yeah man, I've built my own gym - oh, you go to Planet Fitness? Good luck."
Self hosting was the correct solution.
6 years early [0] and you have better uptime than GitHub.
[0] https://news.ycombinator.com/item?id=22867803
Instability aside I found several things about GitHub awkward, annoying, or missing features so I spent a month building my own. I think we're going to be seeing a lot more of this.
Interesting. I speculated not long ago that Microsoft is really taking a dive here, and that other companies may look to provide better alternatives to GitHub. Self-hosting isn't quite what I had in mind, but it's interesting to read about people who go that route. Microsoft has really put themselves into trouble over the last year or two; some things simply no longer work, that much is clear.
https://mrshu.github.io/github-statuses/ says they are down to 88.15% uptime. Even when you consider uptime of individual components, their best is 99.78%, so two nines.
It would be wild if they dropped below the "two 9's" metric. I think they would need an additional ~16hr of outage in the 90 day rolling period.
https://mrshu.github.io/github-statuses/ suggests that their combined uptime doesn't even meet 1 nine, let alone 2.
The intersection of uptime across every possible service they offer isn't a particularly great metric. I get the point that they are doing badly, but it makes it look worse than I think it really is.
What I would like to see is a combined uptime for "code services", basically Git+Webhooks+API+Issues+PRs, which corresponds to a set of user workflows that really should be their bread & butter, without highlighting things you might not care about (Codespaces, Copilot).
Depends how integrated those features are.
A service's availability is capped by its critical dependencies; this is textbook SRE stuff (see Treynor et al., The Calculus of Service Availability). Copilot may well be beside the point (and it has the worst uptime, dragging everything down), but if Actions depends on Packages, then Actions can be "up" while in reality the service is not functional. If your release pipeline depends on Webhooks, you're unable to release.
The obvious one is git operations: if you don't have git ops then basically everything is down.
So: you're right about Copilot, but the subset you proposed (Git+Webhooks+API+Issues+PRs) has the exact same intersection problem. If git is at one nine, that entire subset is capped at one nine too, no matter how green the rest of it looks.
And to be clear: git operations is sitting at 98.98% on the reconstructed dashboard linked above[1]. That's one nine. GitHub stopped publishing aggregate numbers on their own status page, which tells you something.
[1]: https://mrshu.github.io/github-statuses/
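The serial-dependency argument can be made concrete: the availability of a workflow that needs all of its components up is the product of their availabilities, so it can never exceed the weakest one. A quick sketch (the git figure is the one quoted in this thread; the other component numbers are made up purely for illustration):

```python
# Availability of a workflow that needs ALL of its dependencies up:
# serial dependencies multiply, so the weakest link caps the result.
def combined_availability(components: dict[str, float]) -> float:
    result = 1.0
    for availability in components.values():
        result *= availability
    return result

code_services = {
    "git": 0.9898,      # git operations figure quoted in this thread
    "webhooks": 0.997,  # the rest are illustrative values only
    "api": 0.997,
    "issues": 0.998,
    "prs": 0.997,
}

print(f"{combined_availability(code_services):.4f}")  # prints 0.9790
```

Even with every other component at better than two nines, the combined "code services" availability lands below the git figure alone, which is the intersection problem in a nutshell.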
Also, I had never considered that breaking your uptime into a bunch of different components is just a strategy to make your SRE record look better than it actually is. The combined uptime tells the real story (88%!). Thanks for the link.
The number of nines assigned to a suite of services is not indicative of the quality of SRE at any given company, but rather a reflection of the tradeoffs the business has decided to make. Guaranteed there's a dashboard somewhere at GitHub weighing platform stickiness vs. reliability and deciding how hard to let teams push on various initiatives.
Yeah, I was just doing the math on their chart for git operations. I added up 14.93 combined hours of downtime, which puts them WAY lower than the 99.7% metric they show right next to it.
So based on their own reporting, the uptime number should be 99.31%. That means only about 6 additional hours of downtime and they'd fall below 99.0%.
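Spelled out, the downtime-to-uptime conversion over the 90-day rolling window mentioned upthread looks like this (a quick sketch; the 14.93h figure is the one summed from the chart):

```python
# Convert downtime hours to an uptime percentage over a rolling window.
WINDOW_HOURS = 90 * 24  # 90-day rolling window, per the thread

def uptime_percent(downtime_hours: float, window_hours: float = WINDOW_HOURS) -> float:
    return 100.0 * (1.0 - downtime_hours / window_hours)

print(round(uptime_percent(14.93), 2))  # -> 99.31

# Downtime budget before dropping below 99.0%:
budget = 0.01 * WINDOW_HOURS            # 21.6 hours total
print(budget - 14.93)                   # ~6.7 hours of headroom left
```

Both numbers in the comment check out: 14.93h of downtime in a 2160-hour window is 99.31%, with roughly 6.7 hours of headroom before the two-nines line.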
Well, I suppose they're finding out that if you lay off too many people, the knowledge of how the system works goes out the door with them.
Don't worry, the status page says it's 100% working: green, all good. Even though I can't access a static page.
I wondered. For most of today we'd seen Actions being slow to trigger, and I had at least one run that was just missed. It felt like something was definitely off, but the status was green all day until this.
I definitely have better uptime hosting my own Gitea instance. It's faster too; it's basically a knock-off GitHub. Plus, with the privacy concerns, I'm just happier overall. Easy setup: all I did was deploy the Helm chart.
Just cancelled my GitHub Copilot Pro+ annual subscription. The removal of Opus 4.6 stung, but the repeated downtime makes it unusable for me. Very disappointed.
The no-fuss instant refund of my unused subscription (£160) was appreciated.
What will you use now?
Claude Code
Doesn't GitHub Copilot Pro+ only have a month-to-month payment option?
Only Pro (without the plus) can be paid annually, for some reason.
Pro+ did have an annual plan, but they recently paused or dropped the annual plans because they're trying to adjust the pricing model.
I paid 390 USD for a year Pro+ subscription in November 2025.
I used all the 'Premium Requests' every month on (mainly) Opus 4.5 & 4.6. From what I've read on here it seems I was probably a rather unprofitable customer - it felt like a steal.
Yes, it was definitely good value for devs using those models. I was hoping that since GitHub Copilot was rarely talked about compared to the Anthropic/OpenAI offerings, MS would continue to subsidize it to encourage people to move over, but maybe it just got too expensive.
Some of my jobs are completing, some are failing; it seems random. I kind of wish they would just fail outright instead of running for 10 minutes and then failing.
At this point it'll be better to have alerts for when GitHub is online, rather than offline.
What are the good alternatives to GitHub? I've found some alternatives, but as long as people widely use GitHub I can't really use another service, right? I can't share my alternative with another developer and force them to use it for my sake. So I feel locked in: even if I want to move, I can't.
codeberg.org is a thing, and it's perfectly suited for open source projects. Many Neovim plugins and homelab tools I use are hosted on Codeberg with no issues. If you just want to use GitHub as social media, you will never be happy.
GitLab is about as close as you'll get.
Huh? Why not? Say "My git repository is here $URL" then if they want to visit and/or clone it, they'll do that, otherwise don't, why does it matter?
Sure, if you're after reaching the most people, gaining stars, or otherwise trying to attract "popularity" rather than just sharing and collaborating on code, then I'd understand what you mean. But then I'd begin by questioning your motivation first; that's a deeper issue than which SCM platform you use.
If the day ends in Y…
I moved to Gitlab a while ago. It's a whole new level of freedom not having to pay for self-hosted CI runners.
Azure webapp deploys are also trash right now. Microsoft needs to stop slathering h1b copilot slop and get basic things like Windows patches working.
Anyone also seeing Active Directory/Entra issues?
Even Vercel has more downtime nowadays.
Seems like they just can’t deal with the absolute deluge of AI vomit being uploaded every day.
Good riddance I hope it completely destroys them.
Are you talking about what they write to run the service? Because looking at the uptime, and considering it's Microslop, I wouldn't be surprised.
What they write and the extra demand from vibe coders.
At this point it should almost be news when it works.
Multiple 9s
Business as usual.
I am this > < close to just running Gogs or Forgejo on some Hetzner boxes, quitting my job, and charging people for access. Why aren't there like 10 startups doing this yet? Please? I want to give you my money. Just give me a git host that doesn't suck. (All the current ones suck.)
Microsoft again.
I think it is time that Microsoft lets go of GitHub. They are handling it too poorly.
I mean, this is the normal mode of operation for GitHub at this point.
0 nines.
9 nines found somewhere after the decimal point if you measure with enough precision
Microslop is destroying Github
Seems like outages are increasingly frequent nowadays. Obviously this is not the best state of affairs, and developers shouldn't be limited by services. In the meantime I've been experimenting with building third spaces for people to chill in while they wait for the services they depend on to come back up.
The first one I've built is a little ASCII hangout for Claude @ https://clawdpenguin.com but threads like this make me want to build it for Github too.