One Lexiicon OwnGPT AI Model on Oracle Cloud Infrastructure – A Pioneering Collaboration for AI Innovation and Excellence

Since its founding in 2019, One Lexiicon has evolved from a traditional systems integrator into a boutique consultancy focused on delivering outcome-driven, client-centric technology solutions. As an Oracle partner serving clients across industries, the firm specializes in leveraging Oracle Cloud Infrastructure (OCI) to support digital transformation initiatives with speed, scale, and security.

Partnering for Industry-Focused AI Innovation

One Lexiicon’s deep understanding of business workflows—particularly in ERP environments—combined with OCI’s high-performance infrastructure has enabled the development of OwnGPT, a purpose-built AI model designed to streamline enterprise operations and decision-making.

OCI’s robust capabilities around cloud-native deployments, secure data storage, and GPU-powered compute made it the ideal platform to support the full lifecycle of OwnGPT—from training to inference—within a highly scalable, hybrid cloud setup.

Building OwnGPT on OCI: From Vision to Deployment

To support OwnGPT, One Lexiicon deployed OCI compute shapes with NVIDIA A10 Tensor Core GPUs, accelerating both model training and real-time inference. Integration with MySQL HeatWave boosted data processing speeds, while OCI Object Storage ensured secure handling of large-scale enterprise data. OwnGPT was trained on structured ERP data from Oracle EBS Vision, with a focus on the Accounts Payable and Accounts Receivable modules.
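The data-staging workflow itself isn't detailed in the case study, but a minimal sketch of landing an EBS Vision export in OCI Object Storage with the Python SDK could look like the following; the bucket name and file paths are illustrative placeholders, not values from the project.

```python
# Hedged sketch: stage a flattened Accounts Payable export in OCI Object Storage
# so the training pipeline can pull it from one access-controlled location.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data

with open("ap_invoices_export.csv", "rb") as source_file:
    object_storage.put_object(
        namespace_name=namespace,
        bucket_name="owngpt-training-data",              # hypothetical bucket
        object_name="ebs-vision/ap_invoices_export.csv", # hypothetical object path
        put_object_body=source_file,
    )
```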

Optimizing AI Workloads with OCI GPU Infrastructure

OCI’s flexible GPU offerings, available across bare metal and VM instances, enabled right-sized deployments that could evolve with business needs. Oracle’s RDMA-enabled low-latency networking facilitated faster distributed training, while scalable storage solutions ensured efficient management of massive datasets.
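The training framework isn't named in the case study; a common pattern on OCI GPU cluster networks is multi-node PyTorch DistributedDataParallel over NCCL, which is what this hedged sketch assumes (the model and training loop are placeholders).

```python
# Assumed stack: PyTorch DDP + NCCL over the RDMA cluster network.
# Launch on each node with, for example:
#   torchrun --nnodes=2 --nproc_per_node=8 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node-ip>:29500 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL collectives ride the cluster network; RDMA keeps the
    # gradient all-reduce off the host CPU path.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set per process by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                                    # placeholder training loop
        batch = torch.randn(32, 1024, device=local_rank)
        loss = model(batch).sum()
        optimizer.zero_grad()
        loss.backward()                                    # gradients all-reduced across nodes
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```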

As workloads matured, OCI’s support for A100 GPU shapes provided the computational power needed to scale. Fine-tuning during the POC phase resulted in increased precision and performance, tailored specifically to enterprise finance operations.
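Provisioning details aren't published; one way to request an A100 bare metal shape programmatically is through the OCI Python SDK, as in this sketch. All OCIDs, the availability domain, and the image are placeholders, and the shape name is one of OCI's A100 offerings rather than one confirmed in the case study.

```python
# Hedged sketch: launch an A100 bare metal GPU node with the OCI Python SDK.
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",       # placeholder OCID
    availability_domain="Uocm:AP-MUMBAI-1-AD-1",            # example AD name
    shape="BM.GPU4.8",                                      # 8x NVIDIA A100 bare metal shape
    display_name="owngpt-training-node",
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example-gpu-image"       # placeholder GPU image
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"               # placeholder subnet
    ),
)
instance = compute.launch_instance(launch_details).data
print(instance.id, instance.lifecycle_state)
```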

Hybrid Cloud Architecture Highlights

The OwnGPT architecture leverages a hybrid setup, bridging on-premises infrastructure with Oracle Cloud. Key components include:

  • Virtual Cloud Network (VCN): Hosts subnets for load balancing, applications, and compute nodes.
  • Security & Availability: Features Web Application Firewall (WAF), Load Balancer, and Database Cloud Service (DBCS).
  • Resilience: Ensured through fault domains, IAM, Service Gateway, and NAT Gateway.
  • Storage: Both block and file storage options for scalable data handling.

This architecture delivers the flexibility and robustness needed for enterprise AI applications at scale.

Fine-Tuning and Scaling on OCI

As AI workloads evolved, OCI allowed the team to scale seamlessly, integrating more powerful GPU shapes such as the A100 for advanced processing. During the POC, OwnGPT was fine-tuned using data from Oracle EBS Vision, with a focus on the Accounts Payable and Accounts Receivable modules, which enhanced the model's precision and efficiency.
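The fine-tuning stack isn't specified in the case study; purely as an illustration, a Hugging Face Transformers run over AP/AR records flattened into text might be set up roughly as below. The base model, dataset file, and hyperparameters are all assumptions, not project details.

```python
# Hedged sketch: causal-LM fine-tuning over flattened ERP records.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "meta-llama/Llama-2-7b-hf"                 # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# AP/AR records exported from EBS Vision, rendered as one text sample per line.
dataset = load_dataset("json", data_files="ap_ar_records.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="owngpt-finetune",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        bf16=True,                                      # A100s handle bfloat16 natively
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```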

Solution Architecture

At the core of the architecture is a Virtual Cloud Network (VCN) that hosts subnets dedicated to load balancing, application hosting, and server operations.

Figure 1. OCI architecture for deploying the customer's OwnGPT model

The architecture incorporates security and high availability features, including Web Application Firewall (WAF), Load Balancer, and Database Cloud Service (DBCS). Data storage is managed via both block and file storage options, ensuring scalability and reliability.

High availability is maintained through fault domains, while secure access to cloud resources is enforced using Identity and Access Management (IAM), Service Gateway, and NAT Gateway. This robust and flexible architecture supports the scalable deployment of AI-driven applications, providing enterprise-grade performance and security.
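As a rough illustration of how the core network pieces above can be provisioned programmatically (the team's actual tooling, whether console, Terraform, or SDK, isn't stated), the following OCI Python SDK sketch creates a VCN, a private application subnet, and a NAT gateway. The compartment OCID and CIDR ranges are placeholders.

```python
# Hedged sketch: provision the VCN, a private app subnet, and a NAT gateway.
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)
compartment_id = "ocid1.compartment.oc1..example"       # placeholder OCID

vcn = network.create_vcn(oci.core.models.CreateVcnDetails(
    compartment_id=compartment_id,
    cidr_block="10.0.0.0/16",
    display_name="owngpt-vcn",
    dns_label="owngptvcn",
)).data

# Private subnet for the application/compute tier behind the load balancer.
app_subnet = network.create_subnet(oci.core.models.CreateSubnetDetails(
    compartment_id=compartment_id,
    vcn_id=vcn.id,
    cidr_block="10.0.1.0/24",
    display_name="owngpt-app-subnet",
    prohibit_public_ip_on_vnic=True,
)).data

# NAT gateway gives private compute nodes outbound access without public IPs.
nat = network.create_nat_gateway(oci.core.models.CreateNatGatewayDetails(
    compartment_id=compartment_id,
    vcn_id=vcn.id,
    display_name="owngpt-nat",
)).data
```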

Performance Benchmarks

  • Technical Metrics
  • Response time: Average 1.2 seconds, 95th percentile 2.5 seconds, complex queries 3.7 seconds max (a measurement sketch follows this list).
    • Throughput: Peak 120 concurrent users, 500 queries per minute, 99.97% API success rate.
    • Resource utilization: CPU 42% average (78% peak), memory 4.2GB (7.1GB max), storage efficiency 12MB per company for vector data.
    • Scalability: Linear scaling up to 50 companies, stable performance with 100,000+ document chunks, 25+ simultaneous connections.
  • Business Impact
    • Productivity: 68% reduction in search time, 42% fewer repeated technical queries, saving 3.5 hours per employee weekly.
    • Accuracy: 92% on company-specific questions, 87% on technical queries, 95% accuracy in source identification.
    • User Satisfaction: 4.7/5 rating, 94% adoption among eligible employees, 76% reduction in support tickets.
    • ROI: Break-even in 4.2 months, 327% ROI over 12 months, estimated $142,000 annual savings for mid-sized deployments.
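The measurement tooling behind these figures isn't described; as a simple illustration, response-time averages and percentiles like those above can be derived from request logs along these lines (the log file name is hypothetical).

```python
# Illustrative sketch: compute latency statistics from exported request logs.
import numpy as np

# Per-request latencies in seconds, e.g. parsed from API gateway access logs.
latencies = np.loadtxt("owngpt_request_latencies.txt")  # hypothetical log export

print(f"average       : {latencies.mean():.2f} s")
print(f"95th percentile: {np.percentile(latencies, 95):.2f} s")
print(f"max (complex)  : {latencies.max():.2f} s")
```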

Why Oracle Cloud Is a Strategic Advantage

One Lexiicon’s strategic collaboration with Oracle Cloud Infrastructure (OCI) was driven by OCI’s simplified procurement model, lower total cost of ownership, and consistent high availability across APAC and the Middle East. By integrating MySQL HeatWave on OCI, the team achieved a significant reduction in AI pipeline latency—accelerating both model training and inference. This collaboration not only enhanced performance but also positioned OCI as a compelling foundation for scaling enterprise-grade AI solutions in partnership with One Lexiicon’s innovation-driven approach.
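How HeatWave was wired into the pipeline isn't shown in the case study; the sketch below illustrates only the general pattern of loading an ERP table into the HeatWave cluster and running an analytic query that MySQL can offload to it. The endpoint, credentials, and table names are placeholders.

```python
# Hedged sketch: offload an ERP table to MySQL HeatWave and query it
# from the AI data-preparation pipeline.
import mysql.connector

conn = mysql.connector.connect(
    host="owngpt-heatwave.example.oci.internal",  # placeholder HeatWave endpoint
    user="owngpt_app",
    password="********",
    database="erp_vision",
)
cursor = conn.cursor()

# One-time: mark the table for the HeatWave (RAPID) secondary engine and load it.
cursor.execute("ALTER TABLE ap_invoices SECONDARY_ENGINE = RAPID")
cursor.execute("ALTER TABLE ap_invoices SECONDARY_LOAD")

# Aggregation used to build training features; eligible queries run in-memory
# on HeatWave transparently once the table is loaded.
cursor.execute(
    "SELECT supplier_id, COUNT(*) AS invoices, SUM(amount) AS total_spend "
    "FROM ap_invoices GROUP BY supplier_id"
)
for supplier_id, invoices, total_spend in cursor.fetchall():
    print(supplier_id, invoices, total_spend)

cursor.close()
conn.close()
```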

Conclusion

The deployment of OwnGPT on OCI showcases what’s possible when deep industry knowledge meets robust, scalable AI infrastructure. The partnership delivered a measurable return on investment (ROI), faster insights, and improved end-user satisfaction. As One Lexiicon continues to pioneer AI-led business transformation, OCI will remain a key enabler of future innovation.