
I'm interviewing for roles that involve API integrations. I've done a couple of these interviews now where I was given some general API information (some of it intentionally unclear, some clearer) and I felt I didn't do well. Mainly I was nervous, and I felt a lot of pressure just to understand how the different parts of the APIs interact with each other and how they should be used. This is despite doing this kind of work for a living, and despite not feeling as nervous in more common coding tests, which I actually do less of at work (practicing examples on HackerRank and LeetCode has helped me feel more comfortable with those).

So what resources should I use to practice API integrations, and how should I go about practicing? Especially considering that I need to be able to perform under interview conditions.

[-] sloppy_diffuser@sh.itjust.works 2 points 2 months ago* (last edited 2 months ago)

I'd never heard the term "API integration" before, but after looking it up, I guess I've been doing it for years, including building APIs from scratch.

Internally we've always called it API Orchestration.

No idea what these companies are looking for, but I do most of the technical vetting for hiring, so maybe my advice will be helpful.

We always start at the model. We create models that act as parsers/validators for any external data entering our system (database, APIs, files). We never trust data that hasn't been parsed. We are a TypeScript shop and I don't allow any hard assertions / type casting.
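To give a rough idea of what that looks like (the `UserModel` shape and the choice of zod here are purely illustrative, not a prescription):

```typescript
import { z } from "zod";

// The model doubles as a parser/validator for anything crossing a boundary.
export const UserModel = z.object({
  id: z.string(),
  email: z.string().email(),
  active: z.boolean(),
});

export type User = z.infer<typeof UserModel>;

// External data arrives as `unknown` and gets parsed, never type-cast.
export function parseUser(input: unknown): User {
  return UserModel.parse(input); // throws with details on the exact failing field
}
```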

We wrap every API call in a client library that hides the fact an API call is happening at all. The client makes the call and passes the response to our parser.
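Something along these lines (the endpoint is made up; `parseUser` is from the model sketch above):

```typescript
import { parseUser, type User } from "./user-model"; // hypothetical module from the sketch above

// Client library: callers get a typed User back and never see the HTTP details.
export async function fetchUser(id: string): Promise<User> {
  const res = await fetch(`https://api.example.com/users/${id}`);
  if (!res.ok) {
    throw new Error(`fetchUser(${id}) failed with status ${res.status}`);
  }
  // The response body is untrusted until it passes through the model's parser.
  return parseUser(await res.json());
}
```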

When we need to transform data between two or more systems, we create bidirectional transformers. These are "pure" transformations with no side effects.
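For instance (field names invented), a pair of pure functions mapping one system's model to ours and back:

```typescript
// Hypothetical shapes: an external billing system's customer vs our runtime user.
interface BillingCustomer {
  customer_id: string;
  email_address: string;
}

interface RuntimeUser {
  id: string;
  email: string;
}

// Pure, side-effect-free mappings in both directions.
export const toRuntimeUser = (c: BillingCustomer): RuntimeUser => ({
  id: c.customer_id,
  email: c.email_address,
});

export const toBillingCustomer = (u: RuntimeUser): BillingCustomer => ({
  customer_id: u.id,
  email_address: u.email,
});
```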

We then connect it all together with controllers which house our business logic. They call our client libraries, make decisions, transform data with our transformers, read/write to the database, etc.

We use functional programming techniques so a controller is just a set of functions. We typically have specialised controllers focused on a specific resource / use case.

We lightly follow CQRS paradigms.
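Loosely, that means a resource controller ends up as a module of plain functions with its dependencies passed in and queries kept separate from commands (all names are illustrative):

```typescript
export interface RuntimeUser {
  id: string;
  email: string;
  active: boolean;
}

// Dependencies (API client wrapper, database) are passed in rather than imported,
// which keeps the controller easy to stub in tests.
export interface UserDeps {
  fetchUser(id: string): Promise<RuntimeUser>;
  saveUser(user: RuntimeUser): Promise<void>;
}

// Query side: read-only, returns the runtime model.
export async function findUser(deps: UserDeps, id: string): Promise<RuntimeUser> {
  return deps.fetchUser(id);
}

// Command side: performs a state change and returns nothing to branch on.
export async function deactivateUser(deps: UserDeps, id: string): Promise<void> {
  const user = await deps.fetchUser(id);
  await deps.saveUser({ ...user, active: false });
}
```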

The same resource may have multiple models/controllers. We have a model for the client, the database, the internal runtime, and the view. One controller is usually focused on a single resource and maps it between our clients, the database, and the internal runtime model. This abstracts those details away from the orchestration layer.

Our orchestration controllers only deal with connecting multiple resources, only seeing the runtime models and converting to the view model. Our handlers typically just call one function in these orchestration controllers.

The resource-specific controllers are not aware of each other, which helps avoid cyclic dependencies. They can see each other's (runtime) models, but the orchestration layer is the only one that can fetch resource A from its non-orchestration controller and pass it to a non-orchestration controller focused on resource B that needs it.
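A sketch of that shape (resources invented):

```typescript
// Runtime models are visible across controllers; the controllers themselves are not.
interface RuntimeUser { id: string; email: string; }
interface RuntimeOrder { id: string; userId: string; total: number; }

// View/presentation model returned to the handler.
interface OrderView { orderId: string; placedBy: string; total: number; }

// Functions exposed by the resource-specific controllers for user and order.
interface Deps {
  findUser(id: string): Promise<RuntimeUser>;
  findOrder(id: string): Promise<RuntimeOrder>;
}

// Only the orchestration controller touches both resources: it fetches the user
// (resource A) that the order view (resource B) needs, then maps to the view model.
export async function getOrderView(deps: Deps, orderId: string): Promise<OrderView> {
  const order = await deps.findOrder(orderId);
  const user = await deps.findUser(order.userId);
  return { orderId: order.id, placedBy: user.email, total: order.total };
}
```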

This sort of dependency inversion makes it super easy to test.
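e.g. a test for the orchestration sketch above can pass plain stubs, with no HTTP or database involved (vitest here is just an arbitrary choice of runner):

```typescript
import { describe, expect, it } from "vitest";
import { getOrderView } from "./order-orchestration"; // hypothetical module from the sketch above

describe("getOrderView", () => {
  it("joins the order with the user who placed it", async () => {
    // Stub dependencies stand in for the real resource controllers.
    const deps = {
      findOrder: async (id: string) => ({ id, userId: "u1", total: 42 }),
      findUser: async (id: string) => ({ id, email: "a@example.com" }),
    };

    expect(await getOrderView(deps, "o1")).toEqual({
      orderId: "o1",
      placedBy: "a@example.com",
      total: 42,
    });
  });
});
```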

Edit: Summary...

  • Models for the I/O between external systems.
  • Pure transformers between models.
  • Resource-specific controllers for mapping the same resource between different systems (API clients, Database, Runtime, etc.), following CQRS paradigms.
  • Orchestration controllers for connecting different resources using an internal runtime model and view/presentation model.
  • Avoids dependency cycles using CQRS and orchestration controllers.
  • Small, easily testable units.
  • Stateless (horizontally scalable). Not mentioned above, but implied by FP techniques, which typically use immutable data.
[-] Habahnow@sh.itjust.works 2 points 1 month ago

This information is helpful. How do you usually test potential hires? This is a bit interesting because a lot of this is unfamiliar to me and seems like something the clients I work with would implement on their side. Basically, I work at a software company helping others integrate with our API. That can range from making sure they understand what information to expect from certain calls, down to sitting with the customer's engineers to determine why certain calls aren't returning the data they expect. Regardless, this is nice information to read more about. Thank you

[-] sloppy_diffuser@sh.itjust.works 2 points 1 month ago* (last edited 1 month ago)

I don't work at a FAANG, so no LeetCode or tests. We also don't have the best pay. Not horrid, but probably two-thirds of what a FAANG engineer makes. In addition, I don't work at a company known for software development. Our primary business is networks and connectivity.

All that said, it can be a pain to find candidates, and our functional programming tech stack is pretty niche, which makes matters worse. That's just for perspective.

Interviews are typically:

  • Video conference screening by the hiring manager (30min).
  • Video conference technical screening by someone like me (1h)
  • In-person panel interview for technical screening and personality fit (4-5 people asking questions for 1-2h, with the 2h version split between 2 groups), followed by a VP interview (30min-1h) and a final interview with the hiring manager (30min). About 2.5-3.5h total. The 2h panel is usually only for higher-level positions; 1h is more typical. We try to time it so we can take them to lunch if they want to go.

Technical interviews are mostly just talking shop. I have a list of topics and sample questions, but just as conversation starters:

  • Networking tech: what are sockets, DNS, HTTP(s), describe what a browser is doing in as much detail as possible when you go to a website.
  • Code quality questions: what is linting, static code analysis, formatting, etc. Ask about their editor setup and whether it gives them real-time feedback from these tools.
  • Repo maintenance questions: monorepo, tsconfigs (typescript shop), transpiling, bundling.
  • Testing: unit, integration, perf, property-based.
  • Security: encryption, sanitizing data, OAuth
  • Language question: What are generics, dynamic vs static types, generators, async/multiprocess/threads, stack vs heap, recursion/tail recursion, immutable/stateless, OO vs imperative vs functional. Big-O. Lazy evaluation/monads for those claiming FP experience.
  • API: Rest, HTTP methods as verbs and resources as nouns, query/path/body params (when isn't body allowed), cookies, web sockets, GraphQL, HTTP response codes. OpenAPI/Swagger.
  • CI/CD: Tools they know, pipelines they've built.
  • Cloud: AWS questions, distributed systems: redundancy/high availability/scaling, microservices, API gateway. Disaster recovery. K8s/Docker.
  • Database: SQL/NoSQL, primary keys, indexes, ORMs, joins, map/reduce, etc.
  • Linux: probe some common commands or tasks until I find their upper limit.
  • Data: relational, normal vs non-normal form, one to one/one to many/many to many/DAGs (directed acyclic graphs).

What I'm looking for is someone who understands the technology they work with. Great candidates will relate the question to their experience, not give a textbook response. I'll drill into more nuanced questions if I suspect someone just power-studied and doesn't understand the tech.

For a junior, 10-20% familiarity with the questions is typical. For my level, 70% is ideal. I also don't ask all of them. If you have 5 years experience with databases I may spend 1-2min with some harder concepts to test the claim and move on.

The more you talk about your knowledge that's applicable to the role, the more likely my questions will be geared towards your experience.

Personal projects and a public repo will vastly boost your pre-interview appearance. I will look at your code, your commit history, the tooling you use, etc. This usually tells me more than any interview. I'm not looking for the perfect repo with 100% code coverage, great docs and comments, etc. I'm looking to see how the code evolved from inception.

I look for contributions to other projects big and small. It tells me you had a problem or needed a feature and had the problem solving skills to make it happen. A big +1.

We also offer APIs which customers integrate against, and we don't want our customers to have to reach out for help. So in that specific domain I'll ask what can be done to avoid those support requests: great docs, FAQs, tutorials, code examples, reference implementations of a client using our APIs, tutorials to build those reference implementations step by step, etc.

Our entire back end has a simulator implementation. Clients get the source. They can test against it locally, look at the source to see what it's doing, etc.

We try to provide really good error messages. If a client sends a bad request, we send back exactly where it failed to parse. Clients, through the simulator implementation, can look at the exact parser we use to work it out.
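As a trivial illustration of the idea (this is not our actual response format; the schema and zod here are just for the example), a 400 body can be built straight from the parser's own error output:

```typescript
import { z } from "zod";

const CreateUser = z.object({
  email: z.string().email(),
  age: z.number().int(),
});

// A bad-request handler might echo back exactly which fields failed and why.
export function badRequestBody(input: unknown) {
  const result = CreateUser.safeParse(input);
  if (result.success) return null;

  return {
    error: "invalid_request",
    issues: result.error.issues.map((issue) => ({
      path: issue.path.join("."), // e.g. "age"
      message: issue.message,     // e.g. "Expected number, received string"
    })),
  };
}
```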

Linux experience is a huge plus. Public dotfiles gives me good insight ahead of the interview.

Personal projects don't need to be all code. I've hired bootcamp programmers who were previously a musician, a baker, and a carpenter. Attention to detail, problem solving, and passion towards achieving a goal are great personality traits.

I ask some pointed questions about AI. I love GPT, but more as a search engine replacement than an answer book. I have a couple of juniors who are really smart but rely on it way too much. Its output never fits our style guidelines, so it stands out, and it's vastly hampered their ability to read code. They've improved with some interventions, but I try to suss out how dependent a candidate is on it.

Hopefully some of this word soup is of assistance!

[-] Habahnow@sh.itjust.works 1 points 1 month ago

Yeah this is great information I appreciate it!
