pouet.chapril.org is one of the many independent Mastodon servers you can use to participate in the fediverse.
Chapril (https://www.chapril.org) is a project of April (https://www.april.org).


#orgmode

29 messages · 27 participants · 5 messages today

Is there any good way to use Notion from #orgmode, or do I have to live with it being maddeningly slow? Ideally I'd like to create tasks and subtasks from within Notion, and be able to push and pull changes. This should be granular enough to work per section (including subsections). #emacs

Ladies & gentlemen: here is Denote v4!
"This is a massive release," writes Prot on his website about version 4.0.0 of #Denote, released on 2025-04-15.
And Denote "aims to be a simple-to-use, focused-in-scope, and effective note-taking and file-naming tool for Emacs."
So after publishing a book about Lisp on 2025-04-12, here is the new version of Denote!
protesilaos.com/codelog/2025-0
And Denote, like Emacs, uses text #formats like #orgmode, YAML, txt, and many others.
Happy Emacsing!

Protesilaos Stavrou · Emacs: Denote version 4.0.0. Information about the latest version of my Denote package for GNU Emacs.

Hello #emacs crowd.
I have found this handy blog post github.com/howardabrams/hamacs to search for files containing certain tags in #OrgMode.

But somehow I get the message `Symbol's function definition is void: reduce`.
Shouldn't `reduce` be included in some "core" package, or do any of you know where this could originate from? I did not find anything in my `package-show-package-list`.

As you can see, I am not deep into #elisp and mostly copy and paste my init file 😇
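For what it's worth (this note is my addition, not part of the thread): `reduce` came from the long-obsolete `cl` library, so the blog's snippet likely predates `cl-lib`, where the same function is named `cl-reduce`. A minimal sketch of the usual fix, assuming the snippet calls `reduce` somewhere:

```elisp
;; `reduce' lived in the obsolete `cl' library; modern Emacs ships
;; `cl-lib', whose version of it is named `cl-reduce'.
(require 'cl-lib)

;; Example: fold a list of tag strings into one colon-separated string.
(cl-reduce (lambda (a b) (concat a ":" b))
           '("work" "project" "urgent"))
;; => "work:project:urgent"
```

Renaming the call to `cl-reduce` (after requiring `cl-lib`) should make the void-function error go away; the built-in `seq-reduce` is another option, though its argument order differs.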

My personal VIM-like configuration of Emacs inspired by Doom and Spacemacs. - howardabrams/hamacs

Update:
- Fixed up a number of functions. Abstracted some, improved descriptions for better comprehension by LLMs.
- Tried to disable org-ql caching, but it's tricky so caching is still on.
- Fixed up list-buffers and open-file tools (on @karthink@fosstodon.org's advice)
- Disabled a few tools (notably org-ql-select) by default (now you'll have to manually add them to your config), as getting correct query syntax out of an LLM is a losing gamble.
- Updated some documentation.
- Changed some function names.
- Playing around with adding some additional markers for LLMs to use as reference (e.g. to treat pieces of data they get as actually separate).

It appears to be working, if not better, then at least more reliably now. Some prompting is necessary to avoid it falling back to bad habits (using "X OR Y" in queries, for instance).

Note: reasoning/thinking interferes very badly with calling tools. I'm not sure why that is, but at the moment I recommend against it.

So far, it works reasonably well for general queries, but I still find that org-ql-select-agenda-rifle (which does what it sounds like) pulls way too much into context.

I want to develop a way in which queries can be more precise without going keyword by keyword, but also without letting the LLM screw up the syntax.
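One way the syntax problem can be sidestepped (my sketch, not from the repo): instead of letting the LLM emit raw org-ql s-expressions, a tool can accept plain tag strings and assemble the query itself. This assumes gptel's `gptel-make-tool` API; the tool name and argument shape here are illustrative:

```elisp
(require 'gptel)
(require 'org-ql)

;; Hypothetical tool: the LLM passes plain tag strings, and the tool
;; builds a valid org-ql query in elisp, so malformed syntax can't occur.
(gptel-make-tool
 :name "org_search_tags"
 :description "Find org headings matching ALL of the given tags."
 :args (list '(:name "tags"
               :type array
               :items (:type string)
               :description "Tags every matching heading must have."))
 :category "org"
 :function (lambda (tags)
             (org-ql-select (org-agenda-files)
               ;; Assemble (and (tags "a") (tags "b") ...) ourselves.
               `(and ,@(mapcar (lambda (tag) `(tags ,tag)) tags))
               :action '(org-get-heading t t))))
```

The trade-off is expressiveness: each query shape needs its own tool, but the LLM can no longer produce a query that fails to parse.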

https://git.bajsicki.com/phil/gptel-org-tools

#emacs #gptel #orgmode

Phil's Git · gptel-org-tools: Tooling for LLM interactions with org-mode. Requires gptel and org-ql.

@khinsen

Thanks for your work on tooling for "the art of doing science", and for going to great lengths to make it accessible to others. I'll give it a try!

A humble suggestion: since you're writing about your work on social media without any automated relevance rating and with limited search features, it's advisable to use a salient picture and hashtags, e.g., the ones below.

So I can't find anything for this, #emacs #orgmode: is there any way to limit how many headings org-global-cycle displays? New blog format, and it's great, but marking every post with an archive tag makes all of the posts show as archived, and there are some posts with a lot of subheadings that get in the way of scrolling.
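For reference (my addition, not a reply from the thread): `org-global-cycle` accepts a numeric prefix argument that limits how deep it unfolds, which may be what the question is after:

```elisp
;; Show headings only up to level 2 buffer-wide.
;; Interactively: C-u 2 S-TAB.  From elisp:
(org-global-cycle 2)

;; Alternatively, `org-cycle-max-level' caps how deep visibility
;; cycling ever goes (nil means no limit):
(setq org-cycle-max-level 2)
```

Note that headings tagged `ARCHIVE` are folded specially; `org-cycle-open-archived-trees` controls whether cycling opens them at all.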

And just a day later, here's a git repo with more philosophy than good code.

I think the philosophy part is more important personally, the code we can fix later.

https://git.bajsicki.com/phil/gptel-org-tools

TL;DR: Emacs (and its ecosystem) makes the whole vectorization/RAG/training stuff entirely redundant for this application.

Yes, this code fails still... a bunch, especially given the current lack of guardrails. But the improvement I've seen in the last few days makes me cautiously believe that with enough safeguards and a motivating enough system prompt, an active assistant may be possible.

Image attached isn't nearly as good as I've seen, but it's an off-hand example that demonstrates it working.

The only thing missing for me to be happy with this is one of those organic, grass-fed LLM models whose existence isn't predicated on theft.

#emacs #orgmode #gptel #orgql

RE:
https://fed.bajsicki.com/notes/a6jw3n155z

So... I strongly disagree with LLMs (mostly with the marketing and the training data issue), but I found a use-case for myself that they may actually be 'alright' at.

I organize my life in
#Emacs #orgmode It's great.

But over the years, my notes and journals and everything have become so large that I don't really have a grasp of all the bits and pieces I have logged.

So I started using org-ql recently, which works great for a lot of cases, but not all.

Naturally, I wanted more consolidation between the results, and better filtering, as well as a more general, broad view of the topics I wanted to look up in my notes.

So I started writing some tooling for #gptel, to allow LLMs to call tools within Emacs, and leverage existing packages to do just that.

It's in its inception, and works only 20-25% of the time (because the LLM needs to write the queries in the first place), but it works reasonably well even with smaller models (Mistral Small 24B seems to do alright with 16k context, using llama.cpp).

In general:
- It kinda works, when it wants to.
- The main failure point at the moment is that the LLM isn't able to consistently produce proper syntax for org-ql queries.
- The context window sucks, because I have years of journals and some queries unexpectedly explode, leading to the model going stupid.
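For context (my addition, not in the original post), "proper syntax" here means org-ql's Lisp query forms, which are easy to get subtly wrong. The file path below is hypothetical; the query shape is what a well-formed call looks like:

```elisp
(require 'org-ql)

;; A well-formed org-ql query: headings tagged "exercise" with a
;; timestamp in 2024.  LLMs tend to emit "X OR Y" instead of (or ...).
(org-ql-select "~/org/journal.org"   ; hypothetical journal file
  '(and (tags "exercise")
        (ts :from "2024-01-01" :to "2024-12-31"))
  :action '(org-get-heading t t))
```

A single misplaced paren or an infix `OR` makes the whole query unreadable to org-ql, which is why the failure rate tracks the model's ability to produce exact s-expressions.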

So far it's been able to:
- retrieve journal entries
- summarize them
- provide insights on habits (e.g. exercise, sleep quality, eating times)
- track specific people across my journal and summarize interactions, sentiment, important events

It doesn't sound like a lot, but these are things which would take me more time to do in the next year than I already spent on setting this up.

And I don't need to do anything to my existing notes. It just reads from them as they are, no RAG, no preprocessing, no fuss.

At the same time, this is only part of my plan. Next:

1. Add proper org-agenda searches (such that the LLM can access information about tasks done/planned)
2. Add e-mail access (via mu4e, so it can find all my emails from people/ businesses and add them as context to my questions)
3. Add org-roam searches (to add more specific context to questions - currently I'm basing this entire project around my journal, which isn't ideal)
4. Build tooling for updating information about people in my people.org file (currently I do this manually, and while there's a bunch of stuff, I would love it if it were more up to date with my journal, as an additional context resource)
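A rough sketch of what step 1 could look like (my illustration; the helper name and query are not from the repo), using org-ql's relative-day arguments for scheduling predicates:

```elisp
(require 'org-ql)

;; Illustrative agenda-access helper: open TODO headings scheduled
;; within the next DAYS days, across all agenda files.  An LLM tool
;; could call this with a single integer instead of writing the query.
(defun my/upcoming-tasks (days)
  (org-ql-select (org-agenda-files)
    `(and (todo)
          (scheduled :to ,days))  ; :to N means "within N days from now"
    :action '(org-get-heading t t)))
```

As with the search tooling, the point is that the model only supplies a number; the query syntax stays in elisp where it can't be mangled.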

For now, this is neat, and I think there's potential in this as a private, self-hosted personal assistant. Not ideal, not smart by any means (god it's really really not smart), but with sufficient guardrails, it can speed some of my daily/weekly tasks up. Considerably.

So yeah. I'm actually pretty happy with this, so far.

PS.
#orgql because org-ql doesn't show as an existing tag.