
PerformLine | Morristown, NJ or NYC | Full-Time | Onsite (remote for the right candidate)

PerformLine is the leading RegTech company delivering automated compliance solutions for enterprises looking to mitigate regulatory risk and ensure brand safety. We have systems that crawl the web and ingest emails, chats, and call audio. We then classify and score that content using machine-learning-derived risk models and expert-curated compliance rule sets.

- Senior Software Engineer => https://grnh.se/2eae403e1 We're about halfway through our migration off of a Python monolith into a series of services written in Golang and Python. We're building new channels that provide opportunities for working with social APIs, video and audio transcription, natural language processing, machine learning, unique data engineering problems (Big Data / NoSQL), and plenty of other fun problems to explore.

- Senior Front End Engineer => https://grnh.se/d3yp4s7u1 We're anticipating a need to retool and reformulate our front end as well, and we'll want guidance on which approaches or frameworks to use moving forward. This is a chance for an engaged engineer to drive and own the front end and how it is built, from first principles.


I ran the preview for ~2 months on a few instances. No compatibility issues, but we had queries that were significantly slower than on a vanilla Postgres instance, and a temp table bloat problem where the space never got reclaimed (480G as per \l+, shown as 1050G billable in the AWS console). The preview product team never got back to us, so I'm going to switch back to regular RDS this week and hope my bill doesn't get hosed by the unreclaimed bloat. Gotta go with the devil I know... ʅ(ツ)ʃ
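For anyone wanting to compare on their own instances, these are roughly the checks I was running -- a minimal sketch, nothing fancy (pg_database_size() is what \l+ reports, and pg_stat_database tracks cumulative temp file usage since the last stats reset):

    -- per-database size, same number \l+ shows
    SELECT datname,
           pg_size_pretty(pg_database_size(datname)) AS size
    FROM pg_database;

    -- cumulative temp file usage since the last stats reset
    SELECT datname,
           temp_files,
           pg_size_pretty(temp_bytes) AS temp_usage
    FROM pg_stat_database;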


I'm sorry the product team never got back to you. We'll get back to the email you just sent us on the preview thread.

If you have any issues with your bill, let us know and we'll happily look into it. But I'd rather work with you to figure out what went on and get it fixed for you!


Sure thing, feel free to reply to my email. \l+ shows 485 GB, and temp_bytes from pg_stat_database shows 0, but billable space is 1050G. I'd like to promote Aurora to production, but some of my system's 'legacy' queries, which live behind an ORM, are grinding against the Aurora instances. I haven't had a need to really investigate/optimize the queries (or even try to pick apart what the ORM is doing), as performance on the vanilla Postgres RDS instance wasn't great, but wasn't problematic either, even on the db.t2.large's I'd use in lower environments vs. the db.r4.large's I'm using with Aurora. It may be that I'm just missing something obvious.
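For reference, a sketch of where I'd start to see what the ORM is actually emitting, assuming the pg_stat_statements extension is enabled on the instance (column names below are the 9.6-era ones; newer Postgres versions renamed total_time to total_exec_time):

    -- top 10 statements by total execution time (times in ms)
    SELECT calls,
           round(total_time::numeric, 2) AS total_ms,
           round((total_time / calls)::numeric, 2) AS avg_ms,
           left(query, 80) AS query
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;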


I hope that’s what happens too. Another project I work on switched to Aurora for MySQL and it’s been phenomenal.

Really hoping for a similar experience with Postgres.


Yeah, but AWS has a long history of "yeah, no problem -- oops, we'll just take that off your bill" -- so just ask them to do so.

In fact, they're proactive to the point where they've come back and transparently offered credits once the bills got fully baked.

We had a launch loop happen when an AWS EC2 API went bonkers and we couldn't query the number of running instances, so our balancer logic thought it had none and launched thousands of machines... didn't cost a penny.


To be fair, Amazon's actual cost in that case was probably predominantly the human time to review the case and credit the account. I doubt it cost them more than a few pennies in terms of cost of goods sold.


True, but the way they handled it was stellar.


Oh, indeed. The interaction that probably cost them a few cents kept a very profitable customer coming back for more -- makes perfect sense.


Can you test this on the production version? I know that many issues were addressed during the preview period.


    => select aurora_version();
     aurora_version
    ----------------
     1.0.7
    (1 row)

Maybe it was solved and I need to build a new instance from the snapshot to reclaim the space, as an engine upgrade might not have done it. *shrug* I'll wait for the team to get back to me.


Sorry about the 404s, switching to a different caching plugin :P


It was a lot of fun!

Most of the trouble was herding all the people who showed up. That took 80% of the time; getting the cluster up once everyone was on the network was the easy part.

