Ag → AgTech [An industry transformed well beyond molecules and chemicals]
We are building solutions for a sustainable future, one that must support a global population projected to surpass 9.6 billion by 2050. We approach agriculture holistically, looking across a broad range of solutions: from using biotechnology and plant breeding to produce the best possible seeds, to advanced predictive and prescriptive analytics designed to select the best possible crop system for every acre.
To make this possible, we collect terabytes of data across all aspects of our operations: genome sequencing, crop field trials, manufacturing, supply chain, financial transactions, and everything in between. There is enormous need and potential here to do something that has never been done before. We need great people to help transform these complex scientific datasets into innovative software that is deployed across the pipeline, accelerating the pace and improving the quality of every crop system development decision.
The work: automating the migration of scientific data from legacy systems, augmenting that data, and serving it via a RESTful API. A Kafka client consumes change events published from the legacy system, so new scientific data is processed within minutes, keeping these comprehensive services up to date in near real time and putting more than 12 billion marker calls at your fingertips.
– Go [Golang]
– Protocol buffers and gRPC
– Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, and Google Kubernetes Engine [GKE] or Kubernetes
– Experience working with scientific datasets, or a background in the application of quantitative science to business problems
– Experience building and maintaining data-intensive APIs using a RESTful approach
– Experience with stream processing using Apache Kafka
To apply, please visit the following URL: https://remoteok.io/jobs/71508