Swyx · 14/02/2026
Summary
You Should Build Slack. We’re still not over the Sam Altman town hall; at the town hall he said “tell us what we should build, we’ll probably build it!”

Key Insights
“You Should Build Slack.” — Swyx’s direct suggestion to OpenAI, highlighting the strategic fit.
“Slack has been on a slow ratchet up in prices and has struggled to introduce compelling new AI features.” — Critique of Slack’s current trajectory and AI feature set.
“‘OpenAI Slack’ is your chance to retake the initiative.” — Swyx sees an opportunity for OpenAI to innovate in the workspace communication space.
Full Article
Published: 2026-02-14
Source: https://www.latent.space/p/ainews-why-openai-should-build-slack
We’re still not over the Sam Altman town hall; at the town hall he said “tell us what we should build, we’ll probably build it!” and today at Stanford Treehacks he said another thing about how he chooses projects: he thinks of himself as having made a career out of doing things people think are hard but that would be a big deal if they came true. Well okay, Sam: You Should Build Slack. It fits your criteria: it is hard for anyone else without the clout of OpenAI to pull off, it will be very well received by the tech community, it is an obvious progression of ChatGPT for both your Enterprise and your Coding push, and it would build permanent entrenchment in your customer base. Slack rejected its developer community and went upmarket in 2019, then Salesforce bought it for $27.7B in 2021, and ever since then Slack has been on a slow ratchet up in prices and has struggled to introduce compelling new AI features (Slack AI is occasionally useful but impossible to discover/learn/personalize) while facing constant outages. NPS feels low, and yet every organization in tech uses it.
Feb 14 update: This header is usually at the start of the post, but since it is causing some confusion on HN and Twitter, I am moving it down. The editorial written above is always human written, the recaps below are human reviewed.
AI News for 2/12/2026-2/13/2026. We checked 12 subreddits, 544 Twitters and 24 Discords (256 channels, and 7993 messages) for you. Estimated reading time saved (at 200wpm): 675 minutes. AINews’ website lets you search all past issues. As a reminder, AINews is now a section of Latent Space. You can opt in/out of email frequencies!

It’s a pretty quiet day — the new Dwarkesh-Dario pod is worthwhile but hasn’t generated much new conversation on day 1, and OpenAI claimed a big result in theoretical physics that is mostly getting questioned by some physicists. This means we get to go back to our backlog of mini-editorial ideas for AINews subscribers!
Top tweets (by engagement)
/r/LocalLlama + /r/localLLM Recap
1. MiniMax-M2.5 Model Announcements and Details
- A user expressed surprise at the model’s size, noting that while they expected an increase to 800 billion parameters to compete with models like GLM5, the MiniMax-M2.5 remains at 220 billion parameters. This is considered impressive given its ‘frontier strength’, suggesting high performance despite the parameter count.
- Another user mentioned the model’s Q4_K_XL size, which is approximately 130GB. This size is significant as it places the model just beyond the capabilities of some hardware, indicating a need for more robust systems to fully utilize the model’s potential.
- There is anticipation for the release of FP4/AWQ, indicating that users are looking forward to further advancements or optimizations in the model’s performance or efficiency. This suggests a community eager for improvements that could enhance usability or reduce resource requirements.
- The MiniMax-M2.5 model is notable for its architecture, which uses 230 billion parameters but activates only 10 billion at a time. This design choice is likely aimed at optimizing computational efficiency, allowing the model to perform well on less powerful hardware, such as GPUs that are not top-of-the-line. This approach could potentially offer a balance between performance and resource usage, making it accessible to more users.
- A comparison is drawn between MiniMax-M2.5 and other large models like GLM and Kimi. GLM has had to double its parameters to maintain performance, while Kimi has reached 1 trillion parameters. The implication is that MiniMax-M2.5 achieves competitive performance with fewer active parameters, which could be a significant advancement in model efficiency and scalability.
- The potential for further optimization through quantization is highlighted, suggesting that MiniMax-M2.5 could be made even more efficient. Quantization could reduce the model’s size and increase its speed, making it feasible to run on machines with 128GB of RAM while still leaving room for additional tasks such as deep-context tool use. This could make the model particularly attractive for users with limited computational resources.
- The MiniMax-M2.5 model is highlighted for its cost-effectiveness, with operational costs significantly lower than competitors like Opus, Gemini 3 Pro, and GPT-5. Specifically, running M2.5 at 100 tokens per second costs roughly $0.30 per hour, which works out to roughly $10,000 per year for four instances running continuously, making it a potentially disruptive option in terms of affordability.
- There is anticipation for the release of open weights on Hugging Face, which would allow for broader experimentation and integration into various applications. This suggests a community interest in transparency and accessibility for further development and benchmarking.
- The potential impact of Minimax M2.5 on existing models like GLM 5.0 and Kimi 2.5 is discussed, with some users suggesting that if the reported benchmarks are accurate, M2.5 could surpass these models in popularity due to its ease of use and cost advantages. This indicates a shift in preference towards models that offer better performance-to-cost ratios.
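The size and cost figures quoted in the recap above are easy to sanity-check. A minimal sketch, assuming the numbers reported in the thread are accurate ($0.30/hour per instance at 100 tok/s, and Q4_K_XL averaging roughly 4.8 bits per weight — both assumptions from the discussion, not independently verified):

```python
# Sanity-check the figures from the MiniMax-M2.5 discussion above.
# Assumptions (from the thread, not verified):
#   - Q4_K_XL quantization averages ~4.8 bits per weight
#   - one instance at 100 tok/s costs $0.30/hour

PARAMS = 220e9           # total parameter count quoted in the thread
BITS_PER_WEIGHT = 4.8    # rough Q4_K_XL average (assumption)

quant_size_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9
print(f"Estimated Q4_K_XL size: ~{quant_size_gb:.0f} GB")
# ~132 GB, consistent with the ~130GB figure quoted above

COST_PER_HOUR = 0.30     # dollars per instance-hour (assumption)
INSTANCES = 4
HOURS_PER_YEAR = 24 * 365

annual_cost = COST_PER_HOUR * INSTANCES * HOURS_PER_YEAR
print(f"Annual cost for {INSTANCES} always-on instances: ${annual_cost:,.0f}")
# $10,512, i.e. roughly the $10,000/year figure quoted above
```

The quantized-size estimate is only a first-order approximation: mixed-precision quants like Q4_K_XL keep some tensors at higher precision, so real file sizes drift a few GB either way.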
Related Topics
- [[topics/ai-agents]]
- [[topics/chatgpt]]
- [[topics/openai-api]]
Related Articles
[AINews] OpenAI and Anthropic go to war: Claude Opus 4.6 vs GPT 5.3 Codex
Swyx · explanation · 89% similar
[AINews] SpaceXai Grok Imagine API - the #1 Video Model, Best Pricing and Latency
Swyx · reference · 84% similar
[AINews] "Sci-Fi with a touch of Madness"
Swyx · explanation · 84% similar
Originally published at https://www.latent.space/p/ainews-why-openai-should-build-slack.