@WrRan
2016-11-24T08:33:46.000000Z
From the root directory of sails:
$ mocha test/benchmarks
To get a more detailed report with millisecond timings for each benchmark, run:
$ mocha test/benchmarks -v
These tests are related to benchmarking the performance of different parts of Sails. For now, our benchmark tests should be "integration" or "acceptance" tests. By that, I mean they should measure a specific "user action" (e.g. running sails new, running sails lift, sending an HTTP request to a dummy endpoint, connecting a Socket.io client, etc.).
Feature-wide benchmarks are the "lowest-hanging fruit", if you will. We'll spend much less development time, and still get valuable benchmarks that will give us ongoing data on Sails performance. This way, we'll know where to start writing lower-level benchmarks to identify choke-points.
Advice from Felix Geisendörfer (@felixge)
- First of all, keep in mind our problems are definitely not the same as Felix's, and we must remember to follow his own advice:
  "[What]...does not work is taking performance advise [sic] from strangers..."
  That said, he's got some great ideas.
- Benchmark-Driven Optimization
- I also highly recommend this talk on optimization and benchmarking (slides).
Here are the most important things we need to benchmark:

Bootstrap
+ sails.load (programmatic)
+ sails.lift (programmatic) and sails lift (CLI)
+ sails load
+ sails new and sails generate *

Router
+ sails.emit('request')
Thankfully, the ORM is already covered by the benchmarks in Waterline core and its generic adapter tests.
Some important things to consider when benchmarking Node.js / Express-based apps in general:
+ The default maxSockets limit, since most of the requests in a benchmark test are likely to originate from the same source.

Sources:
+ https://groups.google.com/forum/#!topic/nodejs/tgATyqF-HIc
Don't know the best route here yet, but here are some links for reference. Would love to hear your ideas!