Thanks so much! Really glad you had fun with it and shared with friends. Dropping an update tonight - should make the app even better. Merry Christmas!
The Python interpreter core loop sounds like the perfect problem for AlphaEvolve. Or its open-source equivalent, OpenEvolve, if DeepMind doesn't want to speed up Python for the competition.
I built this because I wanted a simple tool to overlay grids on reference photos for drawing, but most top search results were either ad-ridden or required uploading images to a server.
GridMakers is different:
Privacy-First: It uses HTML5 Canvas to process everything locally in your browser. No image data is sent to my server.
Specialized Modes: I added presets for Portrait (A4 crop), Mural (10x10 scaling), etc.
Tech Stack: Built with Next.js and Tailwind.
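If you're curious how the local processing works: the heart of a grid overlay is just computing evenly spaced line positions and drawing them on a canvas. A minimal sketch of the spacing math in Python (illustrative only - the app itself is JS/Canvas, and this is not its actual code):

```python
def grid_lines(width, height, rows, cols):
    """Return x-positions of vertical lines and y-positions of
    horizontal lines for an evenly spaced grid overlay."""
    xs = [round(width * c / cols) for c in range(1, cols)]
    ys = [round(height * r / rows) for r in range(1, rows)]
    return xs, ys

# 10x10 mural preset on a 1000x800 reference photo
xs, ys = grid_lines(1000, 800, rows=10, cols=10)
print(xs[:3])  # → [100, 200, 300]
```

In the browser the same coordinates feed straight into `ctx.moveTo`/`ctx.lineTo` calls, so nothing ever leaves the page.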
It's free and I'm just trying to make a useful utility. Feedback welcome! Link: https://gridmakers.app
Interesting read. What stood out to me is that this feels less like a detection problem and more like a cost-shaping one.
Sec-Fetch and Client Hints aren’t decisive on their own, but they’re hard to fake consistently across layers and over time, which is where the real value seems to be.
Curious whether you see these headers as a durable signal, or more as something that will need regular rotation as automation frameworks adapt.
I'm in much the same boat - though my SWE pivot came when I attempted to commercialize my PhD research along with my advisor. We ran a startup for a few years and were then acquired by a global company. I now manage a team of SWEs and coordinate with SMEs in technical fields to make sure our scientific software products are great in both areas. These roles exist, but probably not commonly as a straight hybrid - you may have to lean into one and use the other as a differentiator/value-add. For me, I think I got here by caring about the customer experience first, which takes "whatever it takes": software and science both. I have to be an evangelist for both of these things, but only as a means to a common end, which is to help the end users expand their understanding and abilities with applied knowledge.
Yeah, that’s usually what I do as well. Breaking formulas down into smaller pieces with clear intent helps a lot.
What I’ve been thinking about lately are cases where large formulas already exist, and changing the sheet structure (adding helper columns or moving things around) isn’t always practical. In those situations, it feels useful to first understand what the existing formula is doing structurally, before deciding whether and how to refactor it. I’m not convinced this is a better approach yet - just exploring the space.
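As a toy example of "understand the structure first": even something as crude as measuring parenthesis nesting depth gives a quick read on how tangled a formula is before you touch it (a sketch, not a real formula parser):

```python
def nesting_depth(formula):
    """Max parenthesis depth of a spreadsheet formula string -
    a rough proxy for how tangled it is."""
    depth = best = 0
    for ch in formula:
        if ch == '(':
            depth += 1
            best = max(best, depth)
        elif ch == ')':
            depth -= 1
    return best

f = '=IF(AND(A1>0,B1<5),SUM(C1:C10)/COUNT(C1:C10),"n/a")'
print(nesting_depth(f))  # → 2
```

A real structural view would also need to respect quoted strings and split top-level arguments, but even this level of triage helps decide which formulas are worth refactoring.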
Hey HN! Author here.
Quick summary: SENTINEL is an AI security platform I've been building. Today I'm releasing everything – 97 detection engines, from regex to topological data analysis.
What makes it different:
- Strange Math engines (TDA, Sheaf Theory, Hyperbolic Geometry)
- 39K+ attack payloads for red teaming
- <10ms latency, production-ready
Why now? It's Christmas. Wanted to give back.
Happy to answer any questions!
I’ve been building RAG systems for a while, and I noticed 90% of retrieval failures aren't due to the LLM—they're due to the data. I got tired of debugging hallucinations only to find the retriever had pulled "Page 1 of 5" headers or five duplicate versions of an old policy.
I couldn't find a simple "pandas-profiling" equivalent for unstructured text, so I built this.
It runs locally (CLI) and helps you:
- Detect semantic duplicates (using all-MiniLM-L6-v2) to save vector storage costs.
- Flag PII (API keys, emails) before they get indexed.
- Identify "coverage gaps" by comparing user queries against your docs.
It outputs a standalone HTML report you can show to stakeholders.
Written in Python, open source (MIT). Feedback welcome!
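To give a flavor of the PII pass: at its simplest it is regex screening over chunks before they hit the index. A minimal sketch (the pattern names and rules here are hypothetical, not the tool's actual ruleset):

```python
import re

# Hypothetical patterns - a real ruleset would be much broader.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def flag_pii(chunks):
    """Return (chunk_index, kind) for every chunk matching a pattern."""
    hits = []
    for i, text in enumerate(chunks):
        for kind, pat in PII_PATTERNS.items():
            if pat.search(text):
                hits.append((i, kind))
    return hits

docs = ["Contact bob@example.com for access.",
        "Rotate sk-abcdef0123456789abcd before indexing.",
        "Nothing sensitive here."]
print(flag_pii(docs))  # → [(0, 'email'), (1, 'api_key')]
```

Catching these before indexing is much cheaper than scrubbing a vector store after the fact.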
One bump in your laptop and you are shit out of luck then though. Removable parts imply that there needs to be extremely slight wiggle room (not to the level rightly criticized by the blog post author, but it cannot completely go away).
So not only did they enforce a ridiculously small message limit, they also applied it to the self-hosted version, and they did it without announcing it AND without a suitable migration path.
And still no one from that company has admitted it was a mistake?
I think it's usually a bit more complicated, i.e. the people who were expected to run the process don't, and someone else shows the people asking for access that there's a faster, cheaper, cooler tool.
It's a mixed bag. If you spend much time in a poorly reviewed ecosystem, you quickly learn that if you try to make things work within their broken crap, you will lose a lot of time and have no code to demonstrate you were working at all.
If instead you always inject your own repair layer on top of their crap, you ship more reliably and end up with a pleased client and your lock-in. The pleased client then has to decide whether to listen to a replacement who says you are insane but can't seem to ship anything themselves.
Ok, but the parent commenter invoked a contract. If there is no consent, there is no contract. Simply stating that one is bound by laws isn't a justification; it's just an observation.
I think interruptions had better be the top priority. I find text LLMs rage-inducing with their BS verbiage that takes multiple prompts to cut down, and they still break promises like "one sentence only" by just dropping punctuation. I can't imagine a world where I have to listen to one of these things.
Hebrew wasn’t “literally synthesised” and wasn’t dead. Jews have continuously been writing and publishing works in Hebrew for the past 2,000 years.
It has evolved naturally to some extent over that time, but much less than other languages - a modern Hebrew speaker can understand medieval Hebrew more easily than an English speaker can understand medieval English.
What was synthesised a century ago is additional vocabulary for modern concepts, and that process is ongoing for Hebrew as it is for every other language.
Ruby 4.0's parallel execution improvements are a game-changer for the ecosystem. The ruby::Box feature addresses one of the biggest pain points - GIL limitations - while maintaining Ruby's elegance.
What's particularly exciting is how this positions Ruby for modern workloads. With proper parallelism, Ruby apps can finally compete with Go and Node.js in concurrent scenarios without sacrificing developer happiness.
The typing improvements also can't be overstated. Gradual typing strikes the right balance - it helps teams scale codebases without forcing the verbosity of Java or the complexity of TypeScript's type gymnastics.
Looking forward to seeing how the Rails ecosystem adopts these features. This could spark a Ruby renaissance in 2025.
We may [will] limit your usage in other ways [that we won't tell you about], such as weekly and monthly caps [absolutely] or model and feature usage [which means literally anything goes including stealth quantization and rerouting], at our discretion [whenever we like, teehee!!].
Wrong. I can tell you from 15 years of marriage that it's more complicated than that. Some of the patterns that get put in place are feedback loops, and when you change your own behavior, the loop fixes itself. No need to talk - just do something different. In fact, talking sometimes makes things worse, because a background thing now becomes a first-class issue.
One thing I have observed over and over again is that when I lack assertiveness, because I am tired or something, she feels like she needs to take over. And she does it very well. But it also creates a lot of anxiety in her.
What I am saying to OP is that it's time to be more assertive. (1) There is more at play than the couple here; you can't just watch your children getting wrecked and do nothing. And (2), taking some ownership might actually fix the underlying issue.
Fair point – that's exactly the kind of thing we built this for. Seeing multiple AIs cross-check each other in real time makes biases way more visible than asking them one by one. Cool to see it working as intended.
By the way, try using '@' mentions when you want to tag specific AIs – makes it way easier to direct questions and keeps conversations cleaner. Thanks for the feedback!
This approach using Sec-Fetch-* headers is elegant, but it's worth noting the browser support considerations. According to caniuse, Sec-Fetch-Site has ~95% global coverage (missing Safari < 15.4 and older browsers).
For production systems, a layered defense works best: use Sec-Fetch-Site as primary protection for modern browsers, with SameSite cookies as fallback, and traditional CSRF tokens for legacy clients. This way you get the UX benefits of tokenless CSRF for most users while maintaining security across the board.
The OWASP CSRF cheat sheet now recommends this defense-in-depth approach. It's especially valuable for APIs where token management adds significant complexity to client implementations.
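The layered decision is only a few lines of server code. A sketch (header and token names are illustrative; the SameSite layer is enforced by the browser, so it doesn't appear here):

```python
def csrf_check(headers, cookies, form):
    """Layered CSRF decision:
    1. trust Sec-Fetch-Site when a modern browser sends it,
    2. otherwise fall back to a classic double-submit token."""
    site = headers.get("Sec-Fetch-Site")
    if site is not None:
        # Modern browser: the header cannot be set cross-origin by JS.
        # 'none' means a direct navigation (address bar, bookmark).
        return site in ("same-origin", "none")
    # Legacy client: require the traditional token to match the cookie.
    token = form.get("csrf_token")
    return token is not None and token == cookies.get("csrf_token")
```

The nice property is that legacy clients pay the token-management cost only when the header is absent, which matches the ~95% coverage figure above.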
This is an excellent resource for building mathematical intuition through code. Python's combination of readable syntax and powerful libraries (NumPy, SymPy, Matplotlib) makes it ideal for exploring concepts like linear algebra, calculus, and discrete math interactively.
One approach I've found effective: start with a conjecture, visualize it with matplotlib, then prove it formally. The instant feedback loop helps develop both computational thinking and mathematical rigor. Tools like Jupyter notebooks make this workflow seamless.
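A tiny example of that loop, with the plotting step omitted: conjecture that the sum of the first n odd numbers is n², check it numerically over many cases, and only then reach for the formal proof (by induction, or symbolically in SymPy):

```python
# Conjecture: 1 + 3 + 5 + ... + (2n-1) == n^2
def sum_first_odds(n):
    """Sum of the first n odd numbers."""
    return sum(2 * k + 1 for k in range(n))

# Numerical check before attempting a proof - a failing case
# here would kill the conjecture instantly.
assert all(sum_first_odds(n) == n * n for n in range(200))
print(sum_first_odds(12))  # → 144
```

The cheap falsification step is the point: you only invest in rigor once the computation has failed to find a counterexample.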
For anyone interested in similar resources, "Mathematics for Machine Learning" by Deisenroth et al. and 3Blue1Brown's linear algebra series complement this beautifully by bridging theory and computation.