Who are you?
We are seeking individuals with extensive experience building infrastructure that enables training, fine-tuning, and serving deep learning models at the billion-parameter scale, especially in the NLP domain using PyTorch and the Hugging Face ecosystem. You are a passionate and driven individual who strives to be your best every day.
What you’ll be doing
As a Deep Learning Engineer on the team, you'll have the opportunity to build the infrastructure for in-house LLMs and other in-house deep learning models.
What should you have?
- 2+ years of experience working with large-scale PyTorch-based deep learning applications on GPUs and TPUs, including CUDA-based multi-node, multi-GPU setups
- 2+ years of experience building training and fine-tuning pipelines for large language models using distributed training approaches for both model and data parallelism (a minimal sketch follows this list)
- 2+ years of experience building serving APIs for sub-second latency inference of large language models using various optimization techniques
- Extensive experience with PyTorch, PyTorch Lightning, DeepSpeed, Megatron-LM, JAX/Flax, and the Hugging Face ecosystem
- 1+ years of experience working with ML lifecycle solutions such as Kubeflow, Amazon SageMaker, or Vertex AI for bringing machine learning solutions from research to production
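
For illustration, here is a minimal sketch of the kind of multi-node, multi-GPU work this role involves, using PyTorch's DistributedDataParallel; the model, data, and hyperparameters are placeholders rather than an actual pipeline:

```python
# Minimal data-parallel training sketch with PyTorch DDP.
# The model, batch, and loop below are illustrative placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):  # placeholder training loop
        batch = torch.randn(32, 1024, device=local_rank)
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()  # DDP all-reduces gradients across ranks here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> train.py
```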
Who are you?
You are an individual with extensive experience building large-scale Python backend systems that serve millions of users. You care about your impact and want to be more than just another number in a corporation. In essence, you are ready to wake up every day and be extraordinary.
What you’ll be doing
As a Python Developer, you will join the company's efforts to build the backend controller. You will be responsible for designing and implementing large-scale backend applications using modern paradigms such as microservices, event-driven architectures, distributed computing, and container orchestration.
What should you have?
- 5+ years of experience writing production-grade Python backend code for large-scale systems
- 3+ years of experience building APIs using REST, gRPC, and event-driven approaches
- Proven experience building scalable, high-performance systems using modern backend paradigms such as microservices, event-driven communication, Kubernetes, serverless, etc.
- 3+ years of experience with multi-processing, multi-threading, and asynchronous programming in Python (see the sketch after this list)
- 3+ years of experience working with relational and non-relational databases, vector stores, and petabyte-scale cloud-based data-warehousing solutions such as Snowflake and BigQuery
- Strong knowledge of OOP and design patterns, and experience applying code infrastructure and code design to build scalable systems
- Advantage: experience working with ML, DL, or LLM capabilities as part of a larger production system
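
For illustration, a minimal sketch of asynchronous I/O in Python with asyncio, the style of concurrency this role relies on; `fetch_user` and its workload are hypothetical placeholders:

```python
# Minimal async concurrency sketch; fetch_user stands in for a
# non-blocking call to a database or downstream service.
import asyncio

async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.1)  # placeholder for awaiting real I/O
    return {"id": user_id, "status": "ok"}

async def main():
    # gather runs the coroutines concurrently on one event loop,
    # so total latency tracks the slowest call, not the sum
    users = await asyncio.gather(*(fetch_user(i) for i in range(100)))
    print(len(users), "users fetched")

asyncio.run(main())
```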
What will you be doing?
As a Product Manager, you will be a leader in both technical and business domains, combining them to generate significant leaps in product innovation. Our platform is the first Consumer AI Assistant based on generative AI. As such, it requires exceptional process thinking and pattern recognition, alongside a big-picture understanding of users, operations, data, and growth.
Key Responsibilities
- Mastering simplicity and creative thinking, both within and beyond your own set of products
- Owning business metrics across multiple technological environments and teams (around the globe)
- Defining a product vision and creating a roadmap, driving necessary execution activities to achieve your business metrics
- Leading processes from ideation, through design, development, to implementation, growth and learnings
- Communicating tasks, priorities, experiments, and decisions clearly across a wide spectrum of audiences, from partner teams to the executive level
Who are you?
You are passionate about the future of AI, with a deep understanding of business and technology. You enjoy discovering new things, and you are interested in leading people and ideas, then seeing them translated into impact. You read a lot. It can be the latest trends in AI, War and Peace, or a review of the current exhibition at the Whitney – never mind what; the point is that you are curious about the world. If that sounds like you, you'll enjoy the set of challenges we have to offer 🙂
What should you have?
- 3+ years of product experience, preferably some of it from a fast-paced startup environment
- Strong ability to make things happen around you
- Solid technical knowledge (no need for a CS degree). Working with AI-driven products is a plus
- Experience working in data-rich environments, both quantitative and qualitative
- Experience designing and developing product specs, and working across multiple teams to ensure that results are delivered
Who are you?
You are a seasoned Data Engineer with a deep understanding of data modeling, massively parallel processing (in both real-time and batch scenarios), and bringing machine learning capabilities into large-scale production systems. You have experience at a cutting-edge startup and are passionate about building the data infrastructure that fuels the world's first intelligent agent. You are a team player with excellent collaboration and communication skills and a "can do" approach.
What you’ll be doing
You will contribute your extensive experience building large-scale, data-intensive systems in both real-time and offline scenarios.
What should you have?
- 3+ years of experience building massively parallel processing solutions using technologies such as Spark and Presto (a minimal sketch follows this list)
- 2+ years of experience developing real-time stream processing solutions using Apache Kafka or Amazon Kinesis
- 2+ years of experience developing infrastructure that brings machine learning capabilities to production, using solutions such as Kubeflow, SageMaker, and Vertex AI
- Demonstrated experience orchestrating containerized applications in AWS and GCP using EKS and GKE
- 3+ years of experience writing production-grade Python code and working with both relational and non-relational databases
- 2+ years of experience administering and designing cloud-based data warehousing solutions such as Snowflake or Amazon Redshift
- 2+ years of experience working with unstructured data, complex data sets, and data modeling
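
For illustration, a minimal sketch of a batch aggregation job in PySpark, the kind of massively parallel processing this role covers; the S3 paths, schema, and column names are hypothetical placeholders:

```python
# Minimal PySpark batch rollup sketch; input path and event fields
# (event_ts, event_type, user_id) are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-rollup").getOrCreate()

events = spark.read.parquet("s3://bucket/events/")  # placeholder path

# Roll raw events up into daily counts and approximate unique users
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "event_type")
    .agg(
        F.count("*").alias("events"),
        F.approx_count_distinct("user_id").alias("users"),
    )
)

daily.write.mode("overwrite").partitionBy("day").parquet("s3://bucket/rollups/")
spark.stop()
```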