(Online - E) Frontend Best Practices 22: Code Reviews - Effective & Sustainable
Details
Code Reviews - Effective & Sustainable
(Talks will be in English)
We will cover tips and tricks for successful code reviews: techniques and processes that have proven to make code reviews effective, efficient, and sustainable.
Agenda 18:00 - 20:00
(Online, Teams, followed by open discussion in Wonder.me)
- Intro: Markus Stolze (OST)
- Code Reviews @ Snowflake (Web Agency): Nicolas Karrer
- Code Reviews @ Swisscom: Dominique Bartholdi
- Code Reviews @ Axelra (Tech Venture Builder): Lucas Pelloni
- Code Reviews - Results from empirical research: Prof. Dr. Alberto Bacchelli (Univ. ZH, IFI)
- General Q&A + Panel Discussion
Possible Topics to be Addressed
- What is the main benefit of code reviews in your experience? (Code quality? Sharing knowledge? Spreading ownership? Unifying dev practices? ...)
- Are "traditional" code reviews still relevant? Some people favour pair programming (instant reviews) over code reviews. Their reasoning: code reviews always come "too late", and error correction is often postponed to "never", which makes code reviews highly inefficient. If anything, (very) small pull requests are the alternative to pair programming, and "traditional" code reviews are only relevant when code crosses organisational boundaries. What is your take on this?
- Responding to a pull request is more than entering "LGTM" after brief contemplation (or isn't it?)
- How do you make sure the right person, and the right number of persons, can help with the review in due time? Is it always seniors doing the review? How do you make sure the reviewer knows the guidelines and patterns of the language and is aware of the functional and quality requirements as well as the "architectural constraints"?
- How do the above answers change for different types of code (new feature, bug fix / rollback, small refactoring, large refactoring)?
- How is the actual review delivered to the author? As text in a comment field, in a face-to-face meeting, ...? Who determines this?
- How do you make sure that code reviews are perceived as a fun and friendly learning experience?
- How do you make sure that code reviews don't regress into discussing already-made decisions (architectural, guideline, ...), while still making sure that these decisions get proper review from time to time?
- Any tips for Juniors preparing code for review by Seniors?
- Any tips for Seniors preparing code for review by Juniors?
- Any tips for Team-Leads on how to (not) organise code reviews?
- Reviewing code that has not been formatted according to guidelines (Prettier), linted (ESLint), statically analysed (TypeScript), and covered by a sensible number of unit tests does not make sense - or have you seen exceptions?
- How important is it in your context to provide code- and process-related metrics to authors, code-reviewers and team leads? (e.g. Number of variables, exports/imports, test coverage, cyclomatic complexity, prior defects)
- Do you work with checklists for reviewers? (Can you share them?)
- How do you deal with topics that are not a "natural" focus of code reviews, such as security, accessibility, duplication of existing functionality, (others?)?
- How do you test "difficult to access" properties of code such as performance, scalability, resilience, and usability?
- Do your authors use code walkthroughs (e.g. VSCode "Guided Tour" or Loom Video) for documenting code? Why/why not?
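One way to operationalise the pre-review automation mentioned in the topics above (Prettier, ESLint, TypeScript, unit tests) is a CI gate that runs before any human reviewer is asked to look at a pull request. A minimal sketch as a GitHub Actions workflow - the npm script names (`format:check`, `lint`, `typecheck`) are hypothetical and would need to match your project's package.json:

```yaml
# Hypothetical pre-review gate: block human review until automated checks pass.
name: pre-review-checks
on: pull_request
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run format:check   # Prettier: formatting per guidelines
      - run: npm run lint           # ESLint: lint rules
      - run: npm run typecheck      # TypeScript: static analysis (e.g. tsc --noEmit)
      - run: npm test               # unit tests
```

With a required status check like this, reviewers can focus on design and intent rather than formatting and mechanical errors.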
Follow us on Twitter for Updates: https://twitter.com/cas_fee/
Or contact: Markus.Stolze@ost.ch
Related topics
JavaScript
JavaScript Frameworks
Front-end Development
Software Development
Web Development
