
To the folks at Netbird: please create an entry in the https://wiki.nixos.org explaining how to use it with NixOS.

- Tailscale has one entry

- Pangolin is getting one

I would like to see, even if brief:

1. Getting started

2. Hardware requirements

3. Security considerations

4. Recommended architecture, e.g. running it on a VPS, if that makes sense

5. Configuring a server

6. Configuring devices

7. Resources (links to read more on netbird)

Thank you from the home lab community


Anyone can contribute to the nixos wiki, why don't you get the page started?


Because I don't feel confident. I'm super green at VPNs and this kind of networking. I don't want to give the wrong advice.

I'm editing what I can, but you don't want to take my advice; it would be better if someone who knows does it.

The fact that I'm into home labs doesn't mean I know how to do this specifically. I'm just saying that when I go to the wiki to read up on one of the options, they are missing.

I don't know why there's hostility toward asking for some docs.


This is kind of a weird request, IMO.

If you're a homelab NixOS user, isn't it on you to try to answer these questions? A home lab is for learning, and if you don't want to do that, what's the point?


Very nice tool!

The UI has a few errors on desktop: I cannot see all the issues, the leaderboard doesn't seem to work, and the top bar hides some elements.

Browser: Firefox


Hey, `ollama run` as suggested on HF doesn't seem to work with this model. This worked instead:

ollama pull hf.co/sweepai/sweep-next-edit-1.5B


I've been using it with the Zed editor and it works quite well! Congrats.

This kind of AI is the kind I like and am looking to run on my workstation.


Double-check if you're using the right format.

Example here: https://huggingface.co/sweepai/sweep-next-edit-1.5B/blob/mai...


Could you share the gist/config for how you made it work with Zed?


I had Claude add it as an edit-prediction provider (running locally on llama.cpp on my Macbook Pro). It's been working well so far (including next-edit prediction!), though it could use more testing and tuning. If you want to try it out you can build my branch: https://github.com/ihales/zed/tree/sweep-local-edit-predicti...

If you have llama.cpp installed, you can start the model with `llama-server -hf sweepai/sweep-next-edit-1.5B --port 11434`

Add the following to your settings.json:

```
  "features": {
    "edit_prediction_provider": { "experimental": "sweep-local" },
  },
  "edit_predictions": {
    "sweep_local": {
      "api_url": "http://localhost:11434/v1/completions",
    },
  }
```

Other settings you can add in `edit_predictions.sweep_local` include:

- `model` - defaults to "sweepai/sweep-next-edit-1.5B"

- `max_tokens` - defaults to 2048

- `max_editable_tokens` - defaults to 600

- `max_context_tokens` - defaults to 1200
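
Independent of Zed, you can smoke-test the llama.cpp endpoint that the config above points at. A minimal stdlib-only sketch (assuming `llama-server` is running on port 11434; the prompt here is just a placeholder, not the real edit-prediction template the model is trained on):

```python
import json
import urllib.request

# Payload for llama.cpp's OpenAI-compatible /v1/completions endpoint.
payload = {
    "model": "sweepai/sweep-next-edit-1.5B",
    "prompt": "def add(a, b):",  # placeholder; real requests use Sweep's edit template
    "max_tokens": 64,
}

req = urllib.request.Request(
    "http://localhost:11434/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = json.load(resp)
        print(body["choices"][0]["text"])
except OSError as exc:
    # Server not running, or wrong port
    print(f"server not reachable: {exc}")
```

If this returns a completion but Zed shows nothing, the problem is on the editor side rather than the model server.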

I haven't had time to dive into Zed edit predictions and do a thorough review of Claude's code (it's not much, but my rust is... rusty, and I'm short on free time right now), and there hasn't been much discussion of the feature, so I don't feel comfortable submitting a PR yet, but if someone else wants to take it from here, feel free!


This is great and similar to what I was thinking of doing at some point. I just wasn't sure if it needed to be specific to Sweep Local or if it could be a generic llama.cpp provider.


I was thinking about this too. Zed officially supports self-hosting Zeta, and so one option would be to create a proxy that uses the Zeta wire format, but is packed by llama.cpp (or any model backend). In the proxy you could configure prompts, context, templates, etc., while still using a production build of Zed. I'll give it a shot if I have time.
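
If anyone wants to experiment with the proxy idea, the core of it is just a request-translation layer. A hypothetical sketch, with the Zeta-side field names and the prompt template both invented for illustration (the real wire format would need to be read out of the Zed source):

```python
# Translate a hypothetical Zeta-style edit-prediction request into a
# llama.cpp /v1/completions payload. The input field names ("context",
# "excerpt") and the prompt template are made up; the actual Zeta wire
# format lives in the Zed source tree.
def zeta_to_llamacpp(zeta_request: dict) -> dict:
    prompt = "{}\n<|editable|>\n{}".format(
        zeta_request.get("context", ""),
        zeta_request.get("excerpt", ""),
    )
    return {
        "model": "sweepai/sweep-next-edit-1.5B",
        "prompt": prompt,
        "max_tokens": zeta_request.get("max_tokens", 2048),
    }

example = zeta_to_llamacpp({"context": "fn main() {}", "excerpt": "fn ma"})
print(example["prompt"])
```

The proxy itself would then just be an HTTP server that applies this mapping, forwards to llama.cpp, and maps the response back.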


This is it:

```
{
  "agent": {
    "inline_assistant_model": {
      "model": "hf.co/sweepai/sweep-next-edit-1.5B:latest",
      "provider": "ollama"
    }
  }
}
```


+1, I wasn't able to make it work in Zed either, and it would really help if woile could explain how they made it work on their workstation.


Edit: I asked ChatGPT and tinkered around until I found a setting that could work:

```
{
  "agent": {
    "default_model": { "model": "hf.co/sweepai/sweep-next-edit-1.5B:latest" }
  },
  "inline_completion": {
    "default_provider": { "model": "hf.co/sweepai/sweep-next-edit-1.5B" }
  },
  "chat_panel": {
    "default_provider": { "model": "hf.co/sweepai/sweep-next-edit-1.5B" }
  }
}
```

Then click the AI button at the bottom (the Gemini-like logo) and select the sweep model. You're also expected to pull the model and run/serve it with ollama:

```
ollama pull hf.co/sweepai/sweep-next-edit-1.5B
ollama run hf.co/sweepai/sweep-next-edit-1.5B
```

I did ask ChatGPT about some parts, and I had to merge this into my other settings too, so YMMV, but it's working for me.

It's an interesting model for sure, but I'm unable to get tab autocompletion/inline edits in Zed. I can ask it things in summary and agentic modes of sorts, and there's a button at the top that generates code in the file itself (which I found to be what I preferred in all this).

But I asked it to generate a simple hello world server on localhost:8080 in Golang, and it eventually managed it, though it took me around 10 minutes. Other things, like a plain hello world, were one-shot for the most part.

It's definitely an interesting model, that's for sure. We need more strong models like this; I can't imagine how strong it might be at 7B or 8B (IIRC someone mentioned that a model of that size or similar already exists).

A lot of new development is happening here to make things smaller, and I'm all for it!


Are you sure this works? inline_completion and chat_panel give me "Property inline_completion is not allowed." Not sure if it works regardless?


I really don't know; I had asked ChatGPT to create it, and earlier it gave me a wrong one, so I had to try out a lot of things to see what worked on my Mac.

I then pasted the whole conversation into AI Studio's Gemini Flash to summarize it and give you the correct settings, since my settings also included some servers and their IPs from the Zed remote feature.

Sorry it didn't work. I again asked ChatGPT from my working configuration, and here's what I got (this may also not work, so YMMV):

```
{
  "agent": {
    "default_model": {
      "provider": "ollama",
      "model": "hf.co/sweepai/sweep-next-edit-1.5B:latest"
    },
    "model_parameters": []
  },

  "ui_font_size": 16,
  "buffer_font_size": 15,

  "theme": {
    "mode": "system",
    "light": "One Light",
    "dark": "One Dark"
  },

  // --- OLLAMA / SWEEP CONFIG ---
  "openai": {
    "api_url": "http://localhost:11434/v1",
    "low_latency_mode": true
  },

  //  TAB AUTOCOMPLETE (THIS IS THE IMPORTANT PART)
  "inline_completion": {
    "default_provider": {
      "name": "openai",
      "model": "hf.co/sweepai/sweep-next-edit-1.5B"
    }
  },

  //  CHAT SIDEBAR
  "chat_panel": {
    "default_provider": {
      "name": "openai",
      "model": "hf.co/sweepai/sweep-next-edit-1.5B"
    }
  }
}
```


We'll push to Ollama



Last year I got a laptop with Linux after a six-year Mac gap (work), and it's been super smooth with NixOS and KDE.

My main issue was 16GB of RAM while running a VM and working in Rust, which would kill the system, but now I have more, so those issues are gone.

One of the machines has become a media center with a remote keyboard that anyone at home can operate now.

Multiple screens, Bluetooth, drag and drop, night light: all seem to be working.


Nah, a lot of the people working on `uv` have a massive amount of experience in the Rust ecosystem, including on `cargo`, the Rust package manager. `uv` is even advertised as `cargo` for Python. And what is `cargo`? A FLOSS project.

Lots of lessons from other FLOSS package managers helped `cargo` become great, and that knowledge then helped shape `uv`.


I'm paying for YouTube Music, but on the side I've started buying records on Bandcamp directly from artists and putting them in my Jellyfin library. I do use Lidarr for some older tracks. I think the ecosystem is starting to look good enough that you can have your own personal Spotify.


codeberg.org for open source, because it's a non-profit with, it seems, very well-intentioned people and a good governance structure, and it's starting to support federation.

For a company, I'd recommend self-hosting Forgejo (which also has Actions), which powers Codeberg.

(Forgejo started as a fork of Gitea.)


I bought a Roomba because I associated the brand with quality. It's crap! I bought a nice mopping model, and the cheap one I had before, with a simple only-turn-left algorithm, was even better. I'm not surprised by this.

Reading the comments, I'm glad the industry is way ahead and I was just confused. I think I'm going to sell it and get a better one.


We actually have the "juridical person" concept in most countries. I think it would be ideal for AI.

