Discover connections that traditional vector databases miss. RudraDB combines auto-intelligence and multi-hop discovery in one revolutionary package. The free version, RudraDB-Opin, is perfect for learning, prototyping, and small projects.
pip install rudradb-opin
See the relationship-aware revolution in action
Listen to the podcast discussing the complete features and capabilities, powered by NotebookLM!
What makes RudraDB-Opin the only truly intelligent vector database that thinks for itself
While traditional vector databases require complex manual setup and configuration, RudraDB-Opin automatically detects, analyzes, and optimizes everything for you. Experience the future of AI where the database thinks alongside your applications.
Works with any ML model instantly. OpenAI (1536D), Sentence Transformers (384D), HuggingFace (768D) - all automatically detected with zero configuration.
import rudradb

db = rudradb.RudraDB()  # 🎯 Auto-detects ANY dimension!

# OpenAI Ada-002 (1536D) → Auto-detected ✓
db.add_vector("doc1", openai_embedding)  # embedding returned by your OpenAI client
print(f"Detected: {db.dimension()}D")  # 1536

# Switch to Sentence Transformers (384D) → New detection ✓
db2 = rudradb.RudraDB()
db2.add_vector("doc2", sentence_transformer_emb)  # embedding from SentenceTransformer.encode()
print(f"Detected: {db2.dimension()}D")  # 384

# 🚀 Traditional databases would throw errors!
Builds intelligent connections automatically. Analyzes content, metadata, and context to create meaningful semantic, hierarchical, temporal, causal, and associative relationships.
# Just add documents with rich metadata ("embedding" is any NumPy float32 vector from your model)
db.add_vector("ai_intro", embedding, {
    "category": "AI",
    "difficulty": "beginner",
    "tags": ["intro", "basics"]
})
db.add_vector("ml_advanced", embedding, {
    "category": "AI",
    "difficulty": "advanced",
    "tags": ["ml", "complex"]
})

# 🧠 Automatically creates:
# - Semantic relationship (same category)
# - Temporal relationship (beginner → advanced)
# - Associative relationship (shared tags)
print(f"Auto-relationships: {db.relationship_count()}")
Discovers indirect connections through relationship chains. Intelligent traversal finds documents 2+ hops away that traditional databases miss entirely.
# Traditional search: only similar documents
basic_results = traditional_db.search(query)  # 3 results

# 🔍 RudraDB-Opin auto-enhanced search
enhanced_results = db.search(query, rudradb.SearchParams(
    include_relationships=True,  # 🧠 Auto-detected relationships
    max_hops=2                   # Multi-hop traversal
))  # 7 results!

# Discovers: A → (semantic) → B → (causal) → C
for result in enhanced_results:
    connection = "Direct" if result.hop_count == 0 else f"{result.hop_count}-hop auto-connection"
    print(f"{result.vector_id}: {connection}")

# 🚀 133% more relevant results through auto-intelligence!
Self-tuning system that automatically optimizes search performance, memory usage, and relationship scoring based on your usage patterns. No manual tuning required.
# ⚡ Auto-optimization in action
params = rudradb.SearchParams(
    auto_enhance=True,                    # Enable all auto-optimizations
    auto_balance_weights=True,            # Auto-balance similarity vs relationships
    auto_select_relationship_types=True,  # Auto-choose relevant types
    auto_optimize_hops=True,              # Auto-optimize traversal depth
    auto_calibrate_threshold=True         # Auto-adjust similarity threshold
)
results = db.search(query, params)

# 📊 Check what auto-optimizations were applied
stats = db.get_last_search_enhancement_stats()
print("Auto-optimizations applied:")
print(f"  Weight balanced: {stats['weight_balanced']}")
print(f"  Performance gain: {stats['performance_gain']:.1%}")

# 🚀 System learns and optimizes automatically!
RudraDB-Opin doesn't just store vectors - it thinks, learns, and optimizes alongside your applications. Experience the future where databases have intelligence built-in.
Experience the difference that relationship-awareness makes
Semantic: Content similarity and topical connections
Hierarchical: Parent-child and category structures
Temporal: Sequential and time-based relationships
Causal: Cause-effect and problem-solution pairs
Associative: General associations and recommendations (each type is sketched in code below)
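The five relationship types can also be declared explicitly when you already know how documents connect. A minimal sketch, assuming RudraDB-Opin exposes an add_relationship(source_id, target_id, relationship_type, strength) method (hypothetical signature; check the API reference):

import numpy as np
import rudradb

db = rudradb.RudraDB()  # dimension auto-detected from the first vector
emb = np.random.rand(384).astype(np.float32)  # stand-in embedding
db.add_vector("ai_intro", emb, {"difficulty": "beginner"})
db.add_vector("ml_advanced", emb, {"difficulty": "advanced"})

# One explicit edge per relationship type (strength in 0.0-1.0); hypothetical API
db.add_relationship("ai_intro", "ml_advanced", "semantic", 0.9)      # same topic
db.add_relationship("ai_intro", "ml_advanced", "hierarchical", 0.8)  # category -> subtopic
db.add_relationship("ai_intro", "ml_advanced", "temporal", 0.7)      # beginner -> advanced
db.add_relationship("ai_intro", "ml_advanced", "causal", 0.6)        # problem -> solution
db.add_relationship("ai_intro", "ml_advanced", "associative", 0.5)   # loose association

print(f"Relationships stored: {db.relationship_count()}")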
Zero configuration, maximum intelligence
pip install rudradb-opin

import rudradb
import numpy as np

# Auto-detects dimensions!
db = rudradb.RudraDB()

# Add vectors with any embedding model
embedding = np.random.rand(384).astype(np.float32)
db.add_vector("doc1", embedding, {"title": "AI Concepts"})

# Relationship-aware search (substitute your real query embedding)
query_embedding = np.random.rand(384).astype(np.float32)
results = db.search(query_embedding, rudradb.SearchParams(
    top_k=5,
    include_relationships=True,  # 🔥 The magic!
    max_hops=2
))
print(f"Found {len(results)} intelligent results!")
OpenAI embeddings: 1536D auto-detected
Sentence Transformers and HuggingFace models: 384D and 768D auto-detected
Any transformer model supported
Seamless integration
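As a concrete example of the zero-configuration model support above, here is a minimal sketch using the sentence-transformers package (the model name is just an example; any embedding model that outputs a NumPy vector works the same way):

import numpy as np
import rudradb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384D Sentence Transformers model
db = rudradb.RudraDB()  # dimension auto-detected on the first add

text = "Relationship-aware search connects related documents."
embedding = model.encode(text).astype(np.float32)  # NumPy vector straight from the model
db.add_vector("st_doc", embedding, {"source": "sentence-transformers"})

print(f"Detected: {db.dimension()}D")  # 384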
Watch how relationship-aware search discovers connections others miss
133% more relevant results discovered through intelligent relationships
Multi-hop discovery finds learning prerequisites and advanced topics
Context understanding beyond simple text similarity
Real-world applications that benefit from relationship intelligence
Build learning paths that understand prerequisites and progressions automatically
Discover citation networks and methodological connections automatically
Enhance retrieval pipelines (e.g., RAG) with context-aware relationship understanding (see the sketch below)
Build recommendation systems that understand user journeys and product relationships
Accelerate drug discovery with relationship-aware molecular and research connections
Build intelligent healthcare systems that understand patient data and treatment relationships
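For the RAG use case above, a retrieval step might look like the following sketch. The doc_texts dict and prompt format are hypothetical placeholders for your own document store and LLM call; the search parameters mirror the quick start:

import rudradb

def retrieve_context(db, query_embedding, doc_texts, top_k=5):
    # Relationship-aware retrieval: similar documents plus their connected neighbors
    results = db.search(query_embedding, rudradb.SearchParams(
        top_k=top_k,
        include_relationships=True,
        max_hops=2
    ))
    # doc_texts is a plain {vector_id: original_text} dict kept alongside the database
    return "\n\n".join(doc_texts[r.vector_id] for r in results)

# context = retrieve_context(db, query_embedding, doc_texts)
# prompt = f"Answer using only this context:\n{context}\n\nQuestion: {user_question}"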
Be part of the community building the future of AI
RudraDB-Opin is perfect for learning and prototyping. When you're ready to scale, upgrade seamlessly to full RudraDB with 1M+ vectors and 2M+ relationships.
# 1. Export your data (preserves everything)
data = db.export_data()

# 2. Upgrade the package (run in your shell):
#      pip uninstall rudradb-opin
#      pip install rudradb

# 3. Import at production scale
new_db = rudradb.RudraDB()
new_db.import_data(data)  # Same API, with capacity for 1M+ vectors and 2M+ relationships!