- Domain: IT Software
- Availability: Full-time
- Experience: Senior level
- Type of contract: Indeterminate term
- Location: Budapest
- Accommodation: No
- Salary: To be determined
- Verified company: Yes
The Genesys Multicloud Reporting and Analytics platform is the foundation on which decisions are made that directly impact our customers’ experience as well as their customers’ experiences. We are a data-driven company, handling tens of millions of events per day to answer questions for our customers and their businesses.
We own everything from the real-time (up-to-the-second) tooling that allows our customers to make informed business decisions and operate their contact centers efficiently, to the long-term historical reporting that establishes trends and develops insights. The team is responsible for terabytes of data across multiple cloud environments, including Azure, AWS, and Google Cloud, and also supports customers operating the software on their own infrastructure. We use cutting-edge technologies, including Kubernetes, Kafka, and Flink.
In this role, you’ll partner with the rest of the team of developers, quality engineers, and one other DevOps engineer to develop and operationalize software that collects, processes, stores, and analyzes data at scale. The best person for this role will have a strong engineering background and experience with enterprise-grade software, taking high-level business requirements and turning them into working systems. We are a team whose focus is to operationalize big data products, curate high-value datasets, and improve the reliability of the data platform as our usage continues to grow.
As a member of the team, you will:
- Help develop and deploy highly available, fault-tolerant software and pipelines that improve the features, reliability, performance, and efficiency of the Genesys Multicloud Reporting and Analytics platform.
- Use Terraform, Helm charts, Kubernetes, and GitHub Actions to fully automate the CD pipelines for both software and infrastructure.
- Keep the service highly reliable and available, using Grafana, Loki, and PagerDuty to provide best-in-class observability for troubleshooting and resolving production incidents (see the sketch after this list).
- Collaborate with other engineering teams and assist internal business teams to identify and resolve pain points, including reviewing service adoption and usage trends.
- Build, deploy, maintain, and automate large global deployments in AWS, Azure, and Google Cloud.
- Work closely with the rest of the team, many of whom are located around the world.
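To give a concrete flavor of the automation described above, here is a minimal, hypothetical sketch of a post-deploy health check that pages the on-call engineer through PagerDuty’s Events API v2. The health endpoint and routing key are illustrative placeholders, not details of the actual platform.

```python
"""Minimal sketch of a post-deploy health check that pages on failure.

Hypothetical illustration only: the service URL and the PagerDuty
routing key are placeholders, not part of the actual platform.
"""
import sys
import requests

HEALTH_URL = "https://reporting.example.com/healthz"  # placeholder endpoint
PAGERDUTY_ROUTING_KEY = "<events-v2-routing-key>"     # placeholder key


def service_is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the service answers HTTP 200 within the timeout."""
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False


def page_on_call(summary: str) -> None:
    """Trigger a PagerDuty incident via the Events API v2."""
    requests.post(
        "https://events.pagerduty.com/v2/enqueue",
        json={
            "routing_key": PAGERDUTY_ROUTING_KEY,
            "event_action": "trigger",
            "payload": {
                "summary": summary,
                "source": "post-deploy-check",
                "severity": "critical",
            },
        },
        timeout=5.0,
    )


if __name__ == "__main__":
    if not service_is_healthy(HEALTH_URL):
        page_on_call(f"Post-deploy health check failed for {HEALTH_URL}")
        sys.exit(1)
```

In practice, a check like this would typically run as a step in the GitHub Actions CD pipeline mentioned above, failing the deployment and alerting the team if the service does not come up healthy.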
Skills required:
- Degree in Computer Science or Computer Engineering, or equivalent experience
- A minimum of 3 years in IT and 2 years of DevOps experience
- Good understanding of Linux
- Proven ability to write automation scripts (Bash, Python)
- Attention to detail and a tendency to double-check your work
- Readiness to learn new technologies
- Great communication skills (English)
Additionally, experience in the following is welcome:
- Experience with Docker, Kubernetes, Helm charts, Grafana
- Experience in AWS, Azure or any other public cloud
- Hands-on with Terraform, Helm, Jenkins, GitHub Actions or similar technologies
- Understanding of CI/CD processes
- DevOps or System Administrator experience
- Any development experience in Java