VOICE ARCHIVE

Akshay

@akushaidesu
3 posts
2025-03-23
@TXhunyuan Hunyuan-T1 compared with other models in the MMLU-Pro benchmark. Check it out at https://evalarena.ai/ [image]
South China Morning Post

Tencent unveils Hunyuan T1, a new reasoning AI model powered by its Hunyuan Turbo S AI model, and claims it rivals DeepSeek's R1 in both performance and pricing

Tencent Holdings has unveiled a new artificial intelligence (AI) reasoning model, Hunyuan T1, that rivals DeepSeek's R1 in both performance and pricing.

2024-12-07
7. More general graders being provided by OpenAI for different intents; custom graders of our own also possible later
8. Can configure hyperparameters for fine-tuning, going with defaults for now
9. We can customize a frontier model for our use case using our dataset, our [image]
OpenAI

OpenAI expands its Reinforcement Fine-Tuning Research Program to let developers create expert models in specific domains with very little training data

the repo we used to train Tulu 3. Expanding reinforcement learning with verifiable rewards (RLVR) to more domains and with better answer extraction (what OpenAI calls a grader) […]

RFT - Reinforcement Fine-Tuning #OpenAI #Day2
1. Available next year; a preview is being shown today
2. Model learns to reason in a custom domain based on the data it is being fine-tuned on
3. With a few examples (a dozen), the model can be an expert in that domain, as opposed to
2024-12-07
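The posts above describe RFT's core ingredient: a "grader" that assigns a verifiable reward to a model's answer, which the fine-tuning loop then optimizes. A minimal conceptual sketch of that idea, in Python, is below. This is an illustration only, not OpenAI's actual grader API; the function names and the exact-match scoring scheme are hypothetical.

```python
# Hypothetical sketch of a "grader" in the RLVR sense described in the posts:
# a function that scores a model answer against a verifiable reference,
# producing the reward signal that reinforcement fine-tuning optimizes.

def exact_match_grader(model_answer: str, reference: str) -> float:
    """Return 1.0 when the answer matches the reference after simple
    normalization (whitespace and case), else 0.0."""
    normalize = lambda s: s.strip().lower()
    return 1.0 if normalize(model_answer) == normalize(reference) else 0.0

def graded_reward(samples: list[tuple[str, str]]) -> float:
    """Average grader score over (answer, reference) pairs -- a toy stand-in
    for the aggregate reward a fine-tuning run would maximize."""
    if not samples:
        return 0.0
    return sum(exact_match_grader(a, r) for a, r in samples) / len(samples)
```

In a real RFT setup the grader can be far richer than exact match (partial credit, structured answer extraction), which is what the "better answer extraction" remark in the excerpt points at.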