Created: 3 years, 8 months ago by AleBzk
Modified: 3 years, 2 months ago
CC: webrtc-reviews_webrtc.org, AleBzk, peah-webrtc, Andrew MacDonald, aleloi, tterriberry_mozilla.com, audio-team_agora.io, hlundin-webrtc, kwiberg-webrtc, minyue-webrtc, the sun, aluebs-webrtc, bjornv1
Target Ref: refs/heads/master
Project: webrtc
Visibility: Public.
Description: audioproc_f with simulated mic analog gain
The gain suggested by AGC is optionally used in audioproc_f to simulate analog gain applied to the mic.
The simulation is done by applying digital gain to the input samples.
This functionality is optional and disabled by default. If an AECdump is provided and the mic gain simulation is enabled, an extra "level undo" step is performed to virtually restore the unmodified mic signal.
Authors: alessiob, aleloi
BUG=webrtc:7494
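The simulation described in the CL can be sketched as follows. This is a minimal, hypothetical illustration (the names `LevelToScaling`, `UndoMicLevel`, and `ApplyMicLevel` are not from the CL), and a simple linear mapping from the [0, 255] mic level to a scaling factor is assumed; the actual fake recording device may use a different gain curve.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of simulating an analog mic gain via digital gain.
// Assumes a linear mapping from the [0, 255] mic level to a scaling factor.
constexpr int kLevelMax = 255;

float LevelToScaling(int level) {
  return static_cast<float>(level) / kLevelMax;
}

// "Level undo" step: divide out the gain that was active when the AEC dump
// was recorded, virtually restoring the unmodified mic signal.
void UndoMicLevel(int recorded_level, std::vector<float>* samples) {
  const float scaling = LevelToScaling(recorded_level);
  if (scaling == 0.f) {
    return;  // Cannot undo an analog level of 0.
  }
  for (float& s : *samples) {
    s /= scaling;
  }
}

// Apply the level suggested by the AGC as if it were an analog mic gain,
// by scaling the input samples digitally.
void ApplyMicLevel(int suggested_level, std::vector<float>* samples) {
  const float scaling = LevelToScaling(suggested_level);
  for (float& s : *samples) {
    s *= scaling;
  }
}
```

With this convention, undoing and re-applying the same level leaves the samples unchanged, which is why the AEC dump case needs the extra undo step before the simulated gain is applied.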
Patch Set 1 : set_stream_analog_level and stream_analog_level moved into parent class AudioProcessingSimulator #
Total comments: 13
Patch Set 2 : Comments from Alex addressed #
Total comments: 13
Patch Set 3 : comments addressed #
Total comments: 6
Patch Set 4 : AGC simulated gain #
Total comments: 66
Patch Set 5 : FakeRecordingDevice interface simplified, UTs fixes, logs verbosity-- #
Total comments: 52
Patch Set 6 : comments addressed #
Total comments: 47
Patch Set 7 : FakeRecordingDevice refactoring, minor comments addressed #
Total comments: 3
Patch Set 8 : FakeRecordingDevice: API simplified, UTs adapted #
Total comments: 7
Patch Set 9 : fake rec device boilerplate reduced, aec dump simulated analog gain logic moved #
Total comments: 50
Patch Set 10 : Merge + comments addressed #
Total comments: 30
Patch Set 11 : comments from Per addressed #
Total comments: 12
Patch Set 12 : agc api call seq, zero undo mic gain, mic level members simplified #
Total comments: 4
Patch Set 13 : minor changes #
Total comments: 6
Patch Set 14 : AEC dump + fake rec device bugfix #
Total comments: 20
Messages
Total messages: 60 (20 generated)
Patchset #1 (id:1) has been deleted
Description was changed from ========== audioproc_f with simulated mic analog gain Make use of the gain suggested by AGC in audioproc_f by simulating analog gain applied to the mic. The simulation is done by applying digital gain to the input samples. This functionality is optional and disabled by default. If an AECdump is provided and the mic gain simulation is active, the suggested gains are only used if the AECdump does *not* contain the sequence of suggested gains. BUG=webrtc:7494 ========== to ========== audioproc_f with simulated mic analog gain Make use of the gain suggested by AGC in audioproc_f by simulating analog gain applied to the mic. The simulation is done by applying digital gain to the input samples. This functionality is optional and disabled by default. If an AECdump is provided and the mic gain simulation is active, the suggested gains are only used if the AECdump does *not* contain the sequence of suggested gains. BUG=webrtc:7494 ==========
alessiob@webrtc.org changed reviewers: + aleloi@webrtc.org
Hi Alex, This is a first patch set with some changes (incl. missing imports flagged by the linter) and a couple of TODOs placed where I would apply the simulated analog mic gain. Essentially, I moved ap_->gain_control()->set_stream_analog_level() and ap_->gain_control()->stream_analog_level() to AudioProcessingSimulator, removing code duplication. There is only a special case for the AEC dump based simulator, which I handle using an additional arg in AudioProcessingSimulator::ProcessStream (namely, skip_analog_level_update). Once I get your comments, I will fix and then run a local bit-exactness test to check that the new logic behaves the same as the original audioproc_f - we're not applying any simulated gain yet.
What will happen if we always do the gain control level updating instead of passing the skip_analog_level_update along? I don't think it will affect the AdaptiveAnalog mode. LG but see comments. https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:170: if (!settings_.simulate_mic_gain && msg.has_level()) { The logic gets simpler (less negations) if skip_analog... is changed into do_analog_level_update. https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:108: if (!skip_analog_level_update) Add brackets. Also, suggest changing |if not skip_analog| into |if do_level_update|. https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:131: if (!skip_analog_level_update) Add brackets https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:132: last_specified_microphone_level_ = (settings_.simulate_mic_gain) ? Suggest drop parentheses. https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right): https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.h:140: bool skip_analog_level_update = false); Suggest rename into do_analog_level_update instead. 
https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/wav_based_simulator.h (left): https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/wav_based_simulator.h:48: int last_specified_microphone_level_ = 100; Since the initial value is used in several places, I think it should be defined as 'constexpr int kInitialMicrophoneLevel = 100' somewhere.
https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:164: // If so and the analog gain simulation is disabled, inform If so and *if* the... At least, I think so
Hi Alex, Thanks a lot for your comments. PTAL, once you don't have further comments, I will test the bit exactness of audioproc_f to check that the logic does not change - no simulated analog gain happening in this CL. Alessio https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:164: // If so and the analog gain simulation is disabled, inform On 2017/04/21 11:52:29, aleloi wrote: > If so and *if* the... > > At least, I think so Done. https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:170: if (!settings_.simulate_mic_gain && msg.has_level()) { On 2017/04/21 11:46:42, aleloi wrote: > The logic gets simpler (less negations) if skip_analog... is changed into > do_analog_level_update. Done. https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:108: if (!skip_analog_level_update) On 2017/04/21 11:46:42, aleloi wrote: > Add brackets. Also, suggest changing |if not skip_analog| into |if > do_level_update|. Done. https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:131: if (!skip_analog_level_update) On 2017/04/21 11:46:42, aleloi wrote: > Add brackets Done. https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:132: last_specified_microphone_level_ = (settings_.simulate_mic_gain) ? 
On 2017/04/21 11:46:42, aleloi wrote: > Suggest drop parentheses. Done. https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right): https://codereview.webrtc.org/2834643002/diff/20001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.h:140: bool skip_analog_level_update = false); On 2017/04/21 11:46:42, aleloi wrote: > Suggest rename into do_analog_level_update instead. Done.
lgtm
The CQ bit was checked by alessiob@webrtc.org
CQ is trying da patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.webrtc.org/...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: presubmit on master.tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/presubmit/builds/16458)
alessiob@webrtc.org changed reviewers: + henrik.lundin@webrtc.org
I added Henrik, we need approval from one of the owners.
I'm having difficulties understanding the logic. https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:132: last_specified_microphone_level_ = settings_.simulate_mic_gain ? This is confusing. If update_analog_level is true, but we have not set simulate_mic_gain, then the effect of update_analog_level is essentially a reset. To me it seems that we are overloading the meaning of last_specified_microphone_level_ and update_analog_level. update_analog_level seems to specify that we should set and get the gain from gain_control(), but it also resets the level which was set from the AECdump file.
peah@webrtc.org changed reviewers: + peah@webrtc.org
Some drive-by comments. https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:163: // If the AECdump does not include a level, AGC was disabled during the call. No, I don't think it works like this. msg.has_level() is not added only when the AGC is on. Afaics, the level is always stored in the aecdump for newer recordings. (please correct me if I'm wrong though). https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:172: last_specified_microphone_level_ = msg.level(); I think the name last_specified_microphone_level_ is a bit misleading here. It was before this CL present only in the wav-based simulator as a crude way of mocking an analog gain via allowing the previously set level to be reported as the current mic gain level. Here, it is used in a different manner and I think the code would benefit from it being renamed. For instance, on 172, last_specified_microphone_level is set to the level in the aecdump which may not at all be the microphone level that was specified by the AGC (which is how it was before used). https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:172: last_specified_microphone_level_ = msg.level(); I think this CL would be easier to review with an explanation of how you plan to add the simulation of the analog gain. To me it would make sense to have it in the same place as where the msg.level() is used, so that it is clear where the selection is done, but you have not added that part yet, right? Could you please explain how that is to be added? 
As it is now, the simulation causes the reported analog gain to be bypassed, right? https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:107: bool update_analog_level) { I think the naming of update_analog_level could be more descriptive. I interpret it as a verb meaning that the analog level should be updated, i.e., that the new level reported by APM should be applied to update the analog gain. But that is not what it means, right? What it rather means is that the reported analog level should be specified in the APM, which is quite different. https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:116: // TODO(alessiob): Apply last_specified_microphone_level_ to fwd_frame_ I think the approach to apply the microphone level is not sufficiently general. For instance, separating the gain computation from the gain application would need (at least) two different models of one microphone to be used to properly simulate soft-clipping. Have you considered bundling the simulation of the analog gain, and the gain application, into a virtual microphone model? https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:129: // Update last_specified_microphone_level_ using the value suggested by AGC the AGC https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:132: last_specified_microphone_level_ = settings_.simulate_mic_gain ? This is not correct. If you don't simulate the mic gain, you should not reset the gain. 
I think it works in practice though, since in the dump-simulator you will set the last_specified_microphone_level_ before the next call. The else-statement in this expression is however fully redundant and misleading. I think the separation that is introduced between where the simulated gain and the recorded gain are stored and where they are applied in the call to ap_->gain_control()->set_stream_analog_level(..) makes the code harder to read. I'd very much prefer for that to be done at the same place in an if-statement so that it is easy to see how that gain is chosen/computed. https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/audioproc_float.cc (right): https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/audioproc_float.cc:161: DEFINE_bool(simulate_mic_gain, I guess that in the long run you somehow need to specify what mic gain to use. Have you looked into how to specify that to audioproc_f? https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/wav_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/wav_based_simulator.cc:83: settings_.stream_drift_samples ? *settings_.stream_drift_samples : 0); What you do here is that you move the set_stream_analog_level call to the main simulator, and rely on the default parameter for it to always be called. That makes sense, but I'd rather make the default parameter an ordinary parameter in order to make this more explicit.
Thanks for all your comments. I think it's best to discuss the changes offline before I upload a new PS.
After our offline discussion, I made changes to this CL. I also took into account the comments you added in the last PS. I added TODOs in anticipation of a dependent CL (namely, https://codereview.webrtc.org/2846853002/).
Thanks for the new draft! I added some comments. It is, however, a bit hard to comment properly, as the comments are very much dependent on how the related CL https://codereview.webrtc.org/2846853002/ is written. I think this CL would benefit a lot if it is merged with the other CL. https://codereview.webrtc.org/2834643002/diff/60001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:162: RTC_CHECK(msg.has_level()); // Level is always logged in AEC dumps. It is probably fine, but I haven't checked the protobuf storing code. To avoid errors, I'd choose to keep the optional level as it is though, since it does not really affect the rest of the code as much. If you are 100% sure that the level is always stored, this is a good change though. https://codereview.webrtc.org/2834643002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:166: ap_->gain_control()->set_stream_analog_level( What about putting the set_stream_analog_level call into AudioProcessingSimulator? The purpose of that is to contain what is common to both aecdump and wav file processing, and there seems to be code duplication here and in the WavBasedSimulator class. https://codereview.webrtc.org/2834643002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:169: // TODO(aleloi): If settings_.simulate_mic_gain, set undo level to It would be good to have FakeRecordingDevice as part of this CL. https://codereview.webrtc.org/2834643002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:573: ap_->gain_control()->stream_analog_level(); What about putting the stream_analog_level call into AudioProcessingSimulator? 
The purpose of that is to contain what is common to both aecdump and wav file processing, and there seems to be code duplication here and in the WavBasedSimulator class. https://codereview.webrtc.org/2834643002/diff/60001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/wav_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/wav_based_simulator.cc:88: // TODO(aleloi): No undo level to set, i.e., no call to Is this todo really needed? It is a todo about not to do something which is not done. https://codereview.webrtc.org/2834643002/diff/60001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/wav_based_simulator.cc:152: // TODO(aleloi): If settings_.simulate_mic_gain, set the returned value into This makes sense. What about putting that into AudioProcessingSimulator instead?
Description was changed from ========== audioproc_f with simulated mic analog gain Make use of the gain suggested by AGC in audioproc_f by simulating analog gain applied to the mic. The simulation is done by applying digital gain to the input samples. This functionality is optional and disabled by default. If an AECdump is provided and the mic gain simulation is active, the suggested gains are only used if the AECdump does *not* contain the sequence of suggested gains. BUG=webrtc:7494 ========== to ========== audioproc_f with simulated mic analog gain The gain suggested by AGC is optionally used in audioproc_f to simulate analog gain applied to the mic. The simulation is done by applying digital gain to the input samples. This functionality is optional and disabled by default. If an AECdump is provided and the mic gain simulation is enabled, an extra level undo step is performed to virtually restore the unmodified mic signal. BUG=webrtc:7494 ==========
Finally here, PTAL
Patchset #4 (id:80001) has been deleted
Description was changed from ========== audioproc_f with simulated mic analog gain The gain suggested by AGC is optionally used in audioproc_f to simulate analog gain applied to the mic. The simulation is done by applying digital gain to the input samples. This functionality is optional and disabled by default. If an AECdump is provided and the mic gain simulation is enabled, an extra level undo step is performed to virtually restore the unmodified mic signal. BUG=webrtc:7494 ========== to ========== audioproc_f with simulated mic analog gain The gain suggested by AGC is optionally used in audioproc_f to simulate analog gain applied to the mic. The simulation is done by applying digital gain to the input samples. This functionality is optional and disabled by default. If an AECdump is provided and the mic gain simulation is enabled, an extra "level undo" step is performed to virtually restore the unmodified mic signal. Authors: alessiob, aleloi BUG=webrtc:7494 ==========
The CQ bit was checked by aleloi@webrtc.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.webrtc.org/...
LG! Some comments in the unit test, though. https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:163: // If the AECdump does not include a level, AGC was disabled during the call. On 2017/04/26 12:54:44, peah-webrtc wrote: > No, I don't think it works like this. msg.has_level() is not added only when the > AGC is on. Afaics, the level is always stored in the aecdump for newer > recordings. (please correct me if I'm wrong though). I just checked, msg->set_level(gain_control()->stream_analog_level()); is always called. Maybe we should do a simplification and not consider the case !msg.has_level() ? https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:172: last_specified_microphone_level_ = msg.level(); On 2017/04/26 12:54:44, peah-webrtc wrote: > I think the name last_specified_microphone_level_ is a bit misleading here. > > It was before this CL present only in the wav-based simulator as a crude way of > mocking an analog gain via allowing the previously set level to be reported as > the current mic gain level. > > Here, it is used in a different manner and I think the code would benefit from > it being renamed. For instance, on 172, last_specified_microphone_level is set > to the level in the aecdump which may not at all be the microphone level that > was specified by the AGC (which is how it was before used). Acknowledged. https://codereview.webrtc.org/2834643002/diff/40001/webrtc/modules/audio_proc... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:572: PrepareProcessStreamCall(msg, &update_analog_level); I think update_analog_level should be renamed in the method as well. 
https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:172: << msg.level(); This would get logged once for every audio frame. Even for LS_VERBOSE it seems a bit too much. I don't know what verbose logging is typically used for, though. It seems to be used a bit in neteq_impl.cc https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:177: if (!settings_.simulate_mic_gain) { Suggest merge into the if statement above as an else branch. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:80: LOG(LS_INFO) << "[fake rec device] Cannot undo analog level of 0."; Maybe this one should be verbose? It's probably not very important information: if the level is 0, the signal is probably 0 as well. If the level is zero for a period of time (which could conceivably happen), this will also flood the log with identical messages. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:42: // 0.f, and 255 is 1.0f. Nit: please indent comments https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:54: LevelToScalingMappingKind mapping_kind() const; Is mapping_kind() used anywhere? 
https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:56: void ProcessStream(std::vector<rtc::ArrayView<const float>> src, Perhaps add a comment along the lines of 'undoes the gain reported by NotifyAudioDeviceLevel and applies the gain reported by set_analog_level()'. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:68: // level, gain curve (which depends on |mapping_kind_) and the Nit: formatting, |mapping_kind_|) https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc (right): https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:25: const std::vector<MappingKind> kMappingKindsToTest = I suggest adding TODO(...): add new mapping kinds here as they are added to LevelToScalingMappingKind. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:31: // Creates a vector of ArrayView instances pointing to distinct array of floats. Comment is wrong. Should be 'vector of *vectors*'. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:36: } I think we improve readability by removing this function. Compare auto data = CreateOutputMultiChannelBuffer(channels, samples); auto data = std::vector<std::vector<float>>(channels, std::vector<float>(samples, 0)); In the first case, it's not immediately clear what type data is and what kind of buffer the function returns. The reader has to check its implementation. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:43: vectorOfArrayViews.emplace_back(rtc::MakeArrayView(v.data(), v.size())); Does .emplace_back(v) work? I think ArrayView has a std::vector c-tor. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:53: vectorOfArrayViews.emplace_back(rtc::MakeArrayView(v.data(), v.size())); Change to emplace_back(v) if it works. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:79: auto fsgn = [](float x) { return static_cast<int> const auto https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:80: ((x < 0) ? -1 : ((x > 0) ? 1 : 0)); }; I prefer if / else if / else to two nested ternary operators. Also, why 'static_cast<int>'? if x < 0: return -1; else if x == 0: return 0 else: return 1 https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:116: auto src_multichannel_samples_view = VecVecToVecArrayViewOfConst( Should be 'const auto' because we don't write to it, I think. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:144: auto dst_multichannel_samples_prev = CreateOutputMultiChannelBuffer( Suggest instead copy 'dst = kTestMultiChannelSamples'. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:149: auto dst_multichannel_samples = CreateOutputMultiChannelBuffer( And also copy here. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:156: for (auto kind : kMappingKindsToTest) { Nit: prefer 'const auto &' to plain 'auto' if possible. It doesn't make a difference here, but a reader may think 'why does it have to be plain auto? is something copied? what is kMappingKindsToTest' etc. https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:176: dst_multichannel_samples_prev_view.swap(dst_multichannel_samples_view); I think the naming doesn't represent how the vectors are used. Every second iteration, 'prev_view' points to '_samples' and 'samples_view' points to '_prev'. The vectors '_prev/_samples' are rather working buffers. WDYT re this instead: // Update prev and views: dst_multichannel_samples_prev = dst_multichannel_samples; dst_multichannel_samples_prev_view = ..ToArrayView(_prev); https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:183: auto src_multichannel_samples_view = VecVecToVecArrayViewOfConst( const auto
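The reviewer's point about the sign lambda can be illustrated with a small sketch. The helper name `Sign` is hypothetical; it shows the if / else form suggested in the review instead of two nested ternary operators, with no cast needed.

```cpp
#include <cassert>

// Sign helper written with if / else, as suggested in the review, instead
// of two nested ternary operators. Returns -1, 0 or 1.
int Sign(float x) {
  if (x < 0.f) {
    return -1;
  } else if (x > 0.f) {
    return 1;
  }
  return 0;
}
```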
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: ios32_sim_ios9_dbg on master.tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/ios32_sim_ios9_dbg/buil...) ios64_sim_ios10_dbg on master.tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/ios64_sim_ios10_dbg/bui...) ios_arm64_dbg on master.tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/ios_arm64_dbg/builds/20068) ios_dbg on master.tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/ios_dbg/builds/25382) ios_rel on master.tryserver.webrtc (JOB_FAILED, http://build.chromium.org/p/tryserver.webrtc/builders/ios_rel/builds/24260)
Hi, Thanks for the new patch! I have added some comments.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:161: // TODO(peah): Add support for controlling the analog level via the
You can remove this TODO in this CL :-)

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:170: fake_recording_device_.NotifyAudioDeviceLevel(msg.level());
I'd prefer all the handling of the fake_recording_device_ to go into the audio_processing_simulator code. That would simplify the usage of the device, as at least to me the usage of Notification, setting of the analog level and ProcessStream is very distributed and hard to overlook.

My suggestion would be to solve this by simply passing the msg.level() into the ProcessStream call on line 577. Then you'd be able to do all of this inside ProcessStream, and aec_dump_simulator would not need to use FakeAudioDevice.

Something that would conform even more to how the incoming audio streams are used would maybe be to update a state variable with the new level, similarly to how it is done for std::unique_ptr<ChannelBuffer<float>> in_buf_, etc. I.e., not pass this information via the ProcessStream call. I think that would simplify the use of this from the WavBasedSimulator as well (as otherwise you'd need to have a different ProcessStream call for that, or use an optional in the call).

In code:

  class AudioProcessingSimulator {
    void ProcessStream() {
      ...
      if (settings_.simulate_mic_gain) {
        fake_audiodevice.ProcessStream(level_for_input_, samples);
        ap_->gain_control()->set_stream_analog_level(
            fake_recording_device_.analog_level());
      } else {
        ap_->gain_control()->set_stream_analog_level(*level_for_input_);
      }
      ap_->ProcessStream(samples);
      ...
    }

    rtc::Optional<int> level_for_input_;
  };

  class AecDumpBasedSimulator::PrepareProcessStreamCall() {
    ...
    level_for_input_ = rtc::Optional<int>(msg.level());
    ...
  }

WDYT?

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:171: LOG(LS_VERBOSE) << "setting mic gain undo level from AEC dump to "
setting->Setting?

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:172: << msg.level();
On 2017/05/04 12:47:13, aleloi wrote:
> This would get logged once for every audio frame. Even for LS_VERBOSE it seems a
> bit too much. I don't know how what typically use verbose logging for, though.
> It seems to be used a bit in neteq_impl.cc

Yes, I agree. The logging here is only in test code, but it will overwhelm the log with these log messages so I don't think we want that. For logging I think it makes more sense to signal certain events. This is something that we know always happens, so we don't need to log that.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:114: LOG(LS_VERBOSE) << "AGC set_stream_analog_level set to "
Too verbose logging.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:117: ap_->gain_control()->set_stream_analog_level(
I'd prefer a decoupling between the stored stream analog level and the fake audio device for the case when there is no simulation of the analog gain, and I think the code will become simpler for that.

With the suggestions elsewhere in the code, this method can be simplified to:

  void AudioProcessingSimulator::ProcessStream(bool fixed_interface)
    if (settings_.simulate_mic_gain) {
      if (fixed_interface) {
        fake_audiodevice.Transform(level_for_input_, &fwd_frame);
      } else {
        fake_audiodevice.Transform(level_for_input_, in_buf_->channels());
      }
    }
    ap_->gain_control()->set_stream_analog_level(settings_.simulate_mic_gain ?
        fake_recording_device_.analog_level() : *level_for_input_);

    if (fixed_interface) {
      ap_->ProcessStream(&fwd_frame_)
    } else {
      ap_->ProcessStream(in_buf_->channels(), in_config_, out_config_,
                         out_buf_->channels());
    }

    fake_recording_device_.set_analog_level(ap_->gain_control()->stream_analog_level());

WDYT?

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:153: LOG(LS_VERBOSE) << "AGC stream_analog_level() returned "
This will become too much logging.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.h:33: constexpr int kInitialMicrophoneGainLevel = 100;
Does this need to go into the header file? It is only used in audio_processing_simulator.cc, right?

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.h:35: constexpr FakeRecordingDevice::LevelToScalingMappingKind kDefaultMicKind =
Does this need to go into the header file? It is only used in audio_processing_simulator.cc, right?

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/fake_recording_device.h (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:24: // Class for simulating an analog gain controller controlled by
I don't think this comment is correct. The class is not simulating the analog gain controller. It is simulating a recording device. Furthermore, I think it could do more than abstracting non-linearities in the gain curve, so I don't think that needs to be specified in the comment.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:30: // set_analog_level(|level suggested by GainControl|);
This does not really reflect the way it is currently used in the code, right?

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:56: void ProcessStream(std::vector<rtc::ArrayView<const float>> src,
Would it be possible to change ProcessStream to something more descriptive? In APM, ProcessStream is vague on purpose, and the submodule APIs have adopted that, probably both because their operations are not easy to state in one word but also in order to conform to the pattern.

In this case, we can be more clear on what we do, so I'd prefer something more descriptive. In particular as this is used in AudioProcessingSimulator, which already uses 2 other methods named ProcessStream.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:59: void ProcessStream(const AudioFrame* src, AudioFrame* dest);
I think the usage of this class is more complicated than it needs to be.

I'd prefer one single call to simulate the microphone gain application. Furthermore, I see no need or usage for separated input and output variables. Instead, that separation causes a lot of boilerplate code to be present in AudioProcessingSimulator.

My proposal would be to specify the reported gain, the new gain, and the input/output stream in one single call. They belong together, so I don't think there is any need to separate them:

  void Transform(optional<int> mic_level_for_input, AudioFrame* samples);

I think that would significantly simplify the code using this class. WDYT?
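The single-call shape proposed above takes the level to simulate (and, when an AEC dump supplies the originally recorded level, an undo level) together with the samples. Below is a minimal, self-contained sketch of that idea. The names (SimulateMicGain, LevelToLinearScale), the linear 0-255 level-to-scale mapping, and the hard saturation are all illustrative assumptions, not the CL's actual implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Maps a [0, 255] mic level to a linear gain: 0 -> silence, 255 -> unity.
// This linear mapping is an assumption for illustration only.
float LevelToLinearScale(int level) {
  return static_cast<float>(level) / 255.f;
}

// Applies a simulated analog mic gain in place. A negative |undo_level|
// means "real device level unknown", so no undo step is performed.
void SimulateMicGain(int simulated_level, int undo_level,
                     std::vector<float>* samples) {
  float scale = LevelToLinearScale(simulated_level);
  if (undo_level >= 0) {
    const float undo_scale = LevelToLinearScale(undo_level);
    if (undo_scale > 0.f) {
      scale /= undo_scale;  // Virtually restore the unmodified mic signal.
    }
  }
  for (float& x : *samples) {
    // Hard saturation keeps samples in [-1, 1]; a realistic device model
    // might use soft saturation instead, as discussed in this thread.
    x = std::min(1.f, std::max(-1.f, x * scale));
  }
}
```

A soft-saturation variant (e.g. tanh-shaped) could replace the clamp per microphone kind, which is the layout peah argues for below.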
Thanks a lot for your comments. I addressed everything and also added the initial mic gain as an audioproc_f param (default 100 as hard-coded before this change). PTAL

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:161: // TODO(peah): Add support for controlling the analog level via the
On 2017/05/05 06:28:41, peah-webrtc wrote:
> You can remove this TODO in this CL :-)

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:170: fake_recording_device_.NotifyAudioDeviceLevel(msg.level());
On 2017/05/05 06:28:41, peah-webrtc wrote:
> I'd prefer all the handling of the fake_recording_device_ to go into the
> audio_processing_simulator code. That would simplify the usage of the device, as
> at least to me the usage of Notification, setting of the analog level and
> ProcessStream is very distributed and hard to overlook.

Right. I made fake_recording_device_ private instead of protected. Plus, FakeRecordingDevice::SimulateAnalogGain() now also gets the mic level to simulate and the level of the real mic device (if known).

> My suggestion would be to solve this by simply passing the msg.level() into the
> ProcessStream call on line 577. Then you'd be able to do all of this inside
> ProcessStream, and aec_dump_simulator would not need to use FakeAudioDevice.
>
> Something that would conform even more to how the incoming audio streams are
> used would maybe be to update a state variable with the new level, similarly to
> how it is done for std::unique_ptr<ChannelBuffer<float>> in_buf_, etc. I.e., not
> pass this information via the ProcessStream call. I think that would simplify
> the use of this from the WavBasedSimulator as well (as otherwise you'd need to
> have a different ProcessStream call for that (or use an optional in the call).

Yup. I think the changes I've made to address other comments cover this. Take a look and let me know.

> In code
> class AudioProcessingSimulator {
>
>   void ProcessStream() {
>     ...
>     if (settings_.simulate_mic_gain) {
>       fake_audiodevice.ProcessStream(level_for_input_, samples);
>       ap_->gain_control()->set_stream_analog_level(fake_recording_device_.analog_level());
>     } else {
>       ap_->gain_control()->set_stream_analog_level(*level_for_input_);
>     }
>     ap_->ProcessStream(samples);
>     ...
>   }
>
>   rtc::Optional<int> level_for_input_;
> };
>
> class AecDumpBasedSimulator::PrepareProcessStreamCall() {
>   ...
>   level_for_input_ = rtc::Optional<int>(msg.level());
>   ...
>
> WDYT?

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:171: LOG(LS_VERBOSE) << "setting mic gain undo level from AEC dump to "
On 2017/05/05 06:28:41, peah-webrtc wrote:
> setting->Setting?

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:172: << msg.level();
On 2017/05/05 06:28:41, peah-webrtc wrote:
> On 2017/05/04 12:47:13, aleloi wrote:
> > This would get logged once for every audio frame. Even for LS_VERBOSE it seems a
> > bit too much. I don't know how what typically use verbose logging for, though.
> > It seems to be used a bit in neteq_impl.cc
>
> Yes, I agree. The logging here is only in test code, but it will overwhelm the
> log with these log messages so I don't think we want that. For logging I think
> it more make sense to signal certain events. This is something that we know
> always happens so we don't need to log that.

I moved this in the ctor so it's shown once.
https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:177: if (!settings_.simulate_mic_gain) {
On 2017/05/04 12:47:13, aleloi wrote:
> Suggest merge into the if statement above as an else branch.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:114: LOG(LS_VERBOSE) << "AGC set_stream_analog_level set to "
On 2017/05/05 06:28:41, peah-webrtc wrote:
> Too verbose logging.

I only removed the log below (namely, LOG(LS_VERBOSE) << "AGC stream_analog_level() returned "). Note that I'm using LS_VERBOSE as sink and that it's not shown by default (--verbose must be specified).

I think this log is useful to extract AGC suggested values. WDYT?

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:117: ap_->gain_control()->set_stream_analog_level(
On 2017/05/05 06:28:41, peah-webrtc wrote:
> I'd prefer a decoupling between the stored stream analog level and the fake
> audio device for the case when there is no simulation of the analog gain, and I
> think the code will become simpler for that.
>
> With the suggestions elsewhere in the code, this method can be simplified to
>
> void AudioProcessingSimulator::ProcessStream(bool fixed_interface)
>   if (settings_.simulate_mic_gain) {
>     if (fixed_interface) {
>       fake_audiodevice.Transform(level_for_input_, &fwd_frame);
>     } else {
>       fake_audiodevice.Transform(level_for_input_, in_buf_->channels());
>     }
>   }
>   ap_->gain_control()->set_stream_analog_level(settings_.simulate_mic_gain ?
>       fake_recording_device_.analog_level() : *level_for_input_);
>
>   if (fixed_interface) {
>     ap_->ProcessStream(&fwd_frame_)
>   } else {
>     ap_->ProcessStream(in_buf_->channels(), in_config_, out_config_,
>                        out_buf_->channels());
>   }
>
>   fake_recording_device_.set_analog_level(ap_->gain_control()->stream_analog_level());
>
> WDYT?

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:153: LOG(LS_VERBOSE) << "AGC stream_analog_level() returned "
On 2017/05/05 06:28:41, peah-webrtc wrote:
> This will become too much logging.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.h:33: constexpr int kInitialMicrophoneGainLevel = 100;
On 2017/05/05 06:28:41, peah-webrtc wrote:
> Does this need to go into the header file? It is only used in
> audio_processing_simulator.cc, right?

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.h:35: constexpr FakeRecordingDevice::LevelToScalingMappingKind kDefaultMicKind =
On 2017/05/05 06:28:41, peah-webrtc wrote:
> Does this need to go into the header file? It is only used in
> audio_processing_simulator.cc, right?

Done.
https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/fake_recording_device.cc (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.cc:80: LOG(LS_INFO) << "[fake rec device] Cannot undo analog level of 0.";
On 2017/05/04 12:47:13, aleloi wrote:
> Maybe this one should be verbose? It's probably not very important information:
> if the level is 0, the signal is probably 0 as well. If the level is zero for a
> period of time (which could conceivably happen), this will also flood the log
> with identical messages.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/fake_recording_device.h (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:24: // Class for simulating an analog gain controller controlled by
On 2017/05/05 06:28:41, peah-webrtc wrote:
> I don't think this comment is correct. The class is not simulating the analog
> gain controller. It is simulating a recording device. Furthermore, I think it
> could do more than abstracting non-linearities in the gain curve so I don't
> think that is needed to be specified in the comment.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:30: // set_analog_level(|level suggested by GainControl|);
On 2017/05/05 06:28:41, peah-webrtc wrote:
> This does not really reflect the way it is currently used in the code, right?

Right. I'll make it more generic.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:42: // 0.f, and 255 is 1.0f.
On 2017/05/04 12:47:13, aleloi wrote:
> Nit: please indent comments

Done.
https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:54: LevelToScalingMappingKind mapping_kind() const;
On 2017/05/04 12:47:13, aleloi wrote:
> Is mapping_kind() used anywhere?

Nope. Removed. I had added it when I made the kind non-const.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:56: void ProcessStream(std::vector<rtc::ArrayView<const float>> src,
On 2017/05/05 06:28:41, peah-webrtc wrote:
> Would it be possible to change ProcessStream to something more descriptive? In
> APM, ProcessStream is vague by purpose, and the submodule apis have adapted
> that, probably both because their operations are not easy to state in one word
> but also in order to conform to the pattern.
>
> In this case, we can be more clear on what we do so I'd prefer something more
> descriptive. In particular as this is used in AudioProcessingSimulator which is
> using are 2 other methods named ProcessStream.

Right. I went for SimulateAnalogGain()

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:59: void ProcessStream(const AudioFrame* src, AudioFrame* dest);
On 2017/05/05 06:28:41, peah-webrtc wrote:
> I think the usage of this class is more complicated than it needs to.
>
> I'd prefer one single call to simulate the microphone gain application.
> Furthermore, I see no need or usage for separated input and output variables.
> Instead, that separation causes a lot of boilerplate code to be present in
> AudioProcessingSimulator.
>
> My proposal would be to specify the reported gain, the new gain, and the
> input/output stream in one single call. They belong together so I don't think
> there is any need to separate them:
> void Transform(optional<int> mic_level_for_input, AudioFrame* samples).
>
> I think that would significantly simplify the code using this class.
>
> WDYT?

Thanks for this comment. I think the change you propose fits well with having a private/protected member in AudioProcessingSimulator to store the suggested mic level and, optionally, the undo level.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.h:68: // level, gain curve (which depends on |mapping_kind_) and the
On 2017/05/04 12:47:13, aleloi wrote:
> Nit: formatting, |mapping_kind_|)

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:25: const std::vector<MappingKind> kMappingKindsToTest =
On 2017/05/04 12:47:14, aleloi wrote:
> I suggest adding
>
> TODO(...): add new mapping kinds here as they are added to
> LevelToScalingMappingKind.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:31: // Creates a vector of ArrayView instances pointing to distinct array of floats.
On 2017/05/04 12:47:14, aleloi wrote:
> Comment is wrong. Should be 'vector of *vectors*'.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:36: }
On 2017/05/04 12:47:14, aleloi wrote:
> I think we improve readability by removing this function. Compare
>
> auto data = CreateOutputMultiChannelBuffer(channels, samples);
>
> auto data = std::vector<std::vector<float>(channels, std::vector<float>(samples, 0));
>
> In the first case, it's not immediately clear what type data is and what kind of
> buffer the function returns. The reader has to check its implementation.

Thanks for this comment. There's no need anymore for this function since, as you suggest further down, I can just make a deep copy of kTestMultiChannelSamples.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:43: vectorOfArrayViews.emplace_back(rtc::MakeArrayView(v.data(), v.size()));
On 2017/05/04 12:47:14, aleloi wrote:
> Does .emplace_back(v) work? I think ArrayView has a std::vector c-tor.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:53: vectorOfArrayViews.emplace_back(rtc::MakeArrayView(v.data(), v.size()));
On 2017/05/04 12:47:14, aleloi wrote:
> Change to emplace_back(v) if it works.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:79: auto fsgn = [](float x) { return static_cast<int>
On 2017/05/04 12:47:14, aleloi wrote:
> const auto

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:80: ((x < 0) ? -1 : ((x > 0) ? 1 : 0)); };
On 2017/05/04 12:47:14, aleloi wrote:
> I prefer if / else if / else to two nested ternary operators. Also, why
> 'static_cast<int>'?
>
> if x < 0: return -1;
> else if x == 0: return 0
> else: return 1

I thought that static_cast<int> was safer since the lambda function is defined with auto. But it works without it as well, so let's get rid of it. Due to this change, everything fits in 1 line. I'd keep the nested way, which is also used elsewhere (see https://cs.chromium.org/search/?q=%5C?.%2B%5C:.%2B%5C?+package:%5Echromium$+f...).

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:116: auto src_multichannel_samples_view = VecVecToVecArrayViewOfConst(
On 2017/05/04 12:47:14, aleloi wrote:
> Should be 'const auto' because we don't write to it, I think.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:144: auto dst_multichannel_samples_prev = CreateOutputMultiChannelBuffer(
On 2017/05/04 12:47:14, aleloi wrote:
> Suggest instead copy 'dst = kTestMultiChannelSamples'.

Great point! Much simpler now and no need for CreateOutputMultiChannelBuffer() anymore.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:149: auto dst_multichannel_samples = CreateOutputMultiChannelBuffer(
On 2017/05/04 12:47:14, aleloi wrote:
> And also copy here.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:156: for (auto kind : kMappingKindsToTest) {
On 2017/05/04 12:47:14, aleloi wrote:
> Nit: prefer 'const auto &' to plain 'auto' if possible. It doesn't make a
> difference here, but a reader may think 'why does it have to be plain auto? is
> something copied? what is kMappingKindsToTest' etc.

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:176: dst_multichannel_samples_prev_view.swap(dst_multichannel_samples_view);
On 2017/05/04 12:47:14, aleloi wrote:
> I think the naming doesn't represent how the vectors are used. Every second
> iteration, 'prev_view' points to '_samples' and 'samples_view' points to
> '_prev'. The vectors '_prev/_samples' are rather working buffers. WDYT re this
> instead:
>
> // Update prev and views:
> dst_multichannel_samples_prev = dst_multichannel_samples;
> dst_multichannel_samples_prev_view = ..ToArrayView(_prev);

Done.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:183: auto src_multichannel_samples_view = VecVecToVecArrayViewOfConst(
On 2017/05/04 12:47:14, aleloi wrote:
> const auto

Done.
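On the fsgn discussion above: the two styles debated are behaviorally identical, so the choice is purely stylistic. A tiny sketch (function names are hypothetical) showing the nested-ternary and if/else variants side by side:

```cpp
#include <cassert>

// Nested-ternary style, as kept in the unit test after the review.
int SignNestedTernary(float x) {
  return (x < 0) ? -1 : ((x > 0) ? 1 : 0);
}

// Equivalent if/else style, as aleloi suggested.
int SignIfElse(float x) {
  if (x < 0) return -1;
  if (x > 0) return 1;
  return 0;
}
```

Both return -1, 0, or 1 for negative, zero, and positive inputs respectively.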
Hi, Thanks for the new patch. I have some more comments.

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right):

https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:114: LOG(LS_VERBOSE) << "AGC set_stream_analog_level set to "
On 2017/05/05 12:20:17, AleBzk wrote:
> On 2017/05/05 06:28:41, peah-webrtc wrote:
> > Too verbose logging.
>
> I only removed the log below (namely, LOG(LS_VERBOSE) << "AGC
> stream_analog_level() returned ").
> Note that I'm using LS_VERBOSE as sink and that it's not shown by default
> (--verbose must be specified).
>
> I think this log is useful to extract AGC suggested values.
>
> WDYT?

I would not analyze the AGC suggested values like that. What I instead do is to dump them to a file using the ApmDataDumper functionality. That allows the values to be analyzed using numpy/etc.

It may be useful to use the LOG feature to extract only the AGC suggested values, but it would not scale for doing the same for other things. The log would be unreadable with all the data being logged. If I preferred using the LOG functionality for doing analysis, I would only add it briefly, and not keep it as permanent logging.

https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/BUILD.gn (right):

https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/BUILD.gn:675: rtc_source_set("fake_recording_device") {
Is it really worthwhile having a separate build target for the fake_recording_device? I'd suggest to put it in the audioproc_f build target as:
1) It is always built together with that.
2) The build file becomes shorter.
3) The build file becomes easier to read.

WDYT?
https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right):

https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:71: LOG(LS_VERBOSE) << "Setting mic gain undo level from AEC dump";
I think that the log line is a bit unclear. What about "Simulating analog microphone gain"?

https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:176: LOG(LS_VERBOSE) << "AEC dump overriding AGC suggested level to "
(last_specified_microphone_level_ != msg.level()) will be true all the time if you are simulating the microphone behavior. And according to previous statements, I think the value of having permanent log lines for this is very limited. I therefore think you should remove lines 173-178.

https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right):

https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:124: std::vector<rtc::ArrayView<const float>> data_view;
The lines 122-129 are only there to conform to the API of SimulateAnalogGain. As that API is only used here, I'd prefer to change SimulateAnalogGain to be able to receive in_buf_ as it is, which would avoid the need for 122-129.

https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:159: last_specified_microphone_level_ = ap_->gain_control()->stream_analog_level();
With the changes done in this CL, I think last_specified_microphone_level_ should be renamed as it is used a bit differently. What about last_specified_microphone_level_ -> new_analog_level_?
https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:168: << stats.residual_echo_likelihood << ", "; Unrelated change? https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:34: [scaling_factor](float x) { return scaling_factor * x; }); This scaling could create absolute sample values above 1.0f. Saturation is needed for this to be realistic. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:52: return rtc::saturated_cast<int16_t>(scaling_factor * x); This simulation of the microphone scaling is too simple, but could serve as a starting point. However, as the rest of the code indicates that the microphone type (kind) only affects the gain curve mapping I want to point this out already now. For instance, to properly simulate a microphone (for the purpose of the AGC development) you will almost immediately run into the need for simulating soft-saturation, which is not handled in the current design. I would rather have prefferred a layout where there is a call to a separate microphone simulation method for each of the microphone types supported. The same applies to the method on line 22. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:63: LOG(LS_VERBOSE) << "[fake rec device] Cannot undo analog level of 0."; As this is only simulation/testing code, you should not include error handling. Instead just DCHECK as that will simplify the code a lot. That is also how it is handled elsewhere in audioproc_f. 
Silently (at least if you don't write out the verbose log-lines) will easily cause this error to go unnoticed. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:23: // Class for simulating a microphone with analog gain. The class wraps a mapping This seems out of date, could you please update. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:29: // NotifyAudioDeviceLevel(|recorded level of real microphone|); // Optional! NotifyAudioDeviceLevel Is no longer present, right? https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:38: enum class LevelToScalingMappingKind { I think the mapping name is too specific. What if you want to simulate two microphones that both have a linear mapping, but that differ in the way the soft-saturation works. I'd propose to call this something more general, like MicrophoneKind. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:52: void SimulateAnalogGain(std::vector<rtc::ArrayView<const float>> src, Seeing how this is used, I don't see why there needs to be separate outputs and inputs (now the same arrayview is used for input as for output). This is in theory error-prone, if the implementation would rely on them being different. I'd rather prefer to have one input/output variable as that is how it is used. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device.h:53: std::vector<rtc::ArrayView<float>> dest, The style guide says that input-only parameters should come first in the parameter list. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:55: int real_device_level = kRealDeviceLevelUnknown); As real_device_level is actually always used whenever FakeAudioDevice is used, there is no need for a default parameter value. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:66: // - gain curve (which depends on |mapping_kind_|), The style guide says "Declaration comments describe use of the function (when it is non-obvious); comments at the definition of a function describe operation." As the method is private and the class is small, I'd prefer to move all comments to the source file for this method. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/wav_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/wav_based_simulator.cc:64: real_recording_device_level_ = FakeRecordingDevice::kRealDeviceLevelUnknown; This seems like exactly what to use Optional for. I'd rather prefer real_recording_device_level_ to be an rtc::Optional<int> than having to rely on magic values for that. With that, these lines can be removed.
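The Optional-based approach suggested above could look like the following minimal sketch. This is illustrative only: `std::optional` stands in for WebRTC's `rtc::Optional`, and the `UndoGain` helper name and the [0, 255] level range are assumptions, not the CL's actual API.

```cpp
#include <optional>

// Sketch: an unknown real device level is an empty optional rather
// than a magic kRealDeviceLevelUnknown constant. UndoGain and the
// [0, 255] level range are illustrative assumptions.
float UndoGain(std::optional<int> real_device_level) {
  // No level available (e.g. wav input), or a level of 0: apply no
  // undo gain rather than treating it as an error.
  if (!real_device_level || *real_device_level == 0) {
    return 1.0f;
  }
  return 255.0f / static_cast<float>(*real_device_level);
}
```

With an optional, the "level unknown" case is explicit in the type, and callers such as wav_based_simulator.cc no longer need to assign a sentinel value.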
Small quick comment response re build files and GN. More will come later. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:675: rtc_source_set("fake_recording_device") { On 2017/05/05 20:25:21, peah-webrtc wrote: > Is it really worthwhile having a separate build target for the > fake_recording_device? I'd suggest to put it in the audioproc_f build target as: > 1) It is always build together with that. > 2) The build file becomes shorter. > 3) The build file becomes more easy to read. > > WDYT? I don't agree with 3) in this case. Having a modular build shows what logical sub-components a big target can be divided in, and shows dependencies between sub-components. It's another clue for understanding the design of something. There are additional benefits: * it would get simpler to use a 'fake_recording_device' for something else * GN can help enforce design modularity checking that header includes are stated as dependencies Link to 3 slides about build design of a (public) presentation on GN: https://docs.google.com/presentation/d/15Zwb53JcncHfEwHpnG_PoIbbzQ3GQi_cpujYw...
https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:675: rtc_source_set("fake_recording_device") { Additional arguments for modular targets: * GN will guard against including recording_device.h in places it's not needed or can be avoided by having to explicitly state 'recording_device' as a dependency. * Reduced compilation time. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.h:80: rtc::Optional<int> simulated_mic_kind; I suggest convert to FakeRecordingDevice::LevelToScalingMappingKind directly and store as that type. Can Optional<int> agc_mode be changed to an enum at the same time as well? https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:63: LOG(LS_VERBOSE) << "[fake rec device] Cannot undo analog level of 0."; On 2017/05/05 20:25:21, peah-webrtc wrote: > As this is only simulation/testing code, you should not include error handling. > Instead just DCHECK as that will simplify the code a lot. That is also how it is > handled elsewhere in audioproc_f. > > Silently (at least if you don't write out the verbose log-lines) will easily > cause this error to go unnoticed. I think a level of 0 can appear in real recordings. Should that be treated as an error (and crash in debug-mode with DCHECK)? WDYT? 
https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:38: enum class LevelToScalingMappingKind { On 2017/05/05 20:25:21, peah-webrtc wrote: > I think the mapping name is too specific. What if you want to simulate two > microphones that both have a linear mapping, but that differ in the way the > soft-saturation works. I'd propose to call this something more general, like > MicrophoneKind. +1 https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:52: void SimulateAnalogGain(std::vector<rtc::ArrayView<const float>> src, On 2017/05/05 20:25:21, peah-webrtc wrote: > Seeing how this is used, I don't see why there needs to be separate outputs and > inputs (now the same arrayview is used for input as for output). This is in > theory error-prone, if the implementation would rely on them being different. > I'd rather prefer to have one input/output variable as that is how it is used. +1 https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:62: int real_device_level = kRealDeviceLevelUnknown); Note that the comments about parameter ordering and the default parameter value apply here as well.
https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:675: rtc_source_set("fake_recording_device") { On 2017/05/08 09:46:49, aleloi wrote: > On 2017/05/05 20:25:21, peah-webrtc wrote: > > Is it really worthwhile having a separate build target for the > > fake_recording_device? I'd suggest to put it in the audioproc_f build target > as: > > 1) It is always build together with that. > > 2) The build file becomes shorter. > > 3) The build file becomes more easy to read. > > > > WDYT? > > I don't agree with 3) in this case. Having a modular build shows what logical > sub-components a big target can be divided in, and shows dependencies between > sub-components. It's another clue for understanding the design of something. > There are additional benefits: > > * it would get simpler to use a 'fake_recording_device' for something else > * GN can help enforce design modularity checking that header includes are stated > as dependencies > > Link to 3 slides about build design of a (public) presentation on GN: > https://docs.google.com/presentation/d/15Zwb53JcncHfEwHpnG_PoIbbzQ3GQi_cpujYw... You definitely have a great point in that a modular build is a good thing. But to me, going for separate build targets for each class is too extreme if the class is not used elsewhere. *To me this class seems very tied to how audioproc_f is constructed and I don't see how it can be used in any other of our simulators. What is your view on where it could be used elsewhere? *Regarding the modularity checking, there is a point, but I doubt we should do this for individual classes. Or should we be that extreme? 
*Regarding the compilation, that should only hold if fake_recording_device is an optional part in some builds of audioproc_f, but afaics it is always built in as a mandatory part, and will remain so. It could make sense in the linking phase, but for one class that should not make a difference, right? https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.h:80: rtc::Optional<int> simulated_mic_kind; On 2017/05/08 10:15:23, aleloi wrote: > I suggest convert to FakeRecordingDevice::LevelToScalingMappingKind directly and > store as that type. Can Optional<int> agc_mode be changed to an enum at the same > time as well? That makes sense, but the code storing the command-line flag will be a bit more messy. As it is now, this is done in the same manner for all parameters using the SetSettingIfSpecified method. To follow this pattern, a new version of SetSettingIfSpecified should be created. For the other parameters, that sanity check is now done in PerformBasicParameterSanityChecks, but it could for sure be done in SetSettingIfSpecified as well. I don't have a strong opinion, apart from that it should make the code easier to read and less error prone. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:63: LOG(LS_VERBOSE) << "[fake rec device] Cannot undo analog level of 0."; On 2017/05/08 10:15:23, aleloi wrote: > On 2017/05/05 20:25:21, peah-webrtc wrote: > > As this is only simulation/testing code, you should not include error > handling. 
> > Instead just DCHECK as that will simplify the code a lot. That is also how it > is > > handled elsewhere in audioproc_f. > > > > Silently (at least if you don't write out the verbose log-lines) will easily > > cause this error to go unnoticed. > > I think a level of 0 can appear in real recordings. Should that be treated as an > error (and crash in debug-mode with DCHECK)? WDYT? No, nothing that occurs in real recordings should be considered an error.
https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:675: rtc_source_set("fake_recording_device") { On 2017/05/08 11:41:33, peah-webrtc wrote: > On 2017/05/08 09:46:49, aleloi wrote: > > On 2017/05/05 20:25:21, peah-webrtc wrote: > > > Is it really worthwhile having a separate build target for the > > > fake_recording_device? I'd suggest to put it in the audioproc_f build target > > as: > > > 1) It is always build together with that. > > > 2) The build file becomes shorter. > > > 3) The build file becomes more easy to read. > > > > > > WDYT? > > > > I don't agree with 3) in this case. Having a modular build shows what logical > > sub-components a big target can be divided in, and shows dependencies between > > sub-components. It's another clue for understanding the design of something. > > There are additional benefits: > > > > * it would get simpler to use a 'fake_recording_device' for something else > > * GN can help enforce design modularity checking that header includes are > stated > > as dependencies > > > > Link to 3 slides about build design of a (public) presentation on GN: > > > https://docs.google.com/presentation/d/15Zwb53JcncHfEwHpnG_PoIbbzQ3GQi_cpujYw... > > You definitely have a great point in that it is good in having a modular build. > But to me going for separate build targets for each class is too extreme if the > class is not used elsewhere. > > *To me this class seems very tied to how audioproc_f is constructed and I don't > see how it can be used in any other of our simulators. What is your view on > where it could be used elsewhere? > > *Regarding the modularity checking, there is a point, but I doubt we should do > this for individual classes. Or should we be that extreme? 
> > *Regarding the compilation, that should only hold if fake_recording_device is an > optional part in some builds of audioproc_f, but afaics it is always build in as > a mandatory part, and will remain so. It could make sense in the linking phase, > but for one class that should not make a difference, right? Ok, I'm halfway convinced by your arguments :) Last feeble protest: suppose we develop realistic (and complex) simulations of real microphones that span over several files. Maybe we'll want to use them in higher-level unit- and regression tests. Then modularity would have been helpful. But maybe that's a problem for later. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.h:80: rtc::Optional<int> simulated_mic_kind; On 2017/05/08 11:41:33, peah-webrtc wrote: > On 2017/05/08 10:15:23, aleloi wrote: > > I suggest convert to FakeRecordingDevice::LevelToScalingMappingKind directly > and > > store as that type. Can Optional<int> agc_mode be changed to an enum at the > same > > time as well? > > That makes sense, but the code storing the command-line flag will be a bit more > messy. As it is now, this is done in the same manner for all parameters using > the SetSettingIfSpecified method. To follow this pattern, a new version of > SetSettingIfSpecified should be created. > > That sanity check is for the other parameters now done in > PerformBasicParameterSanityChecks but it could for sure be done in > SetSettingIfSpecified as well. > > I don't have a strong opinion, apart from that it should make the code easier to > read and less error prone. > Ok, I didn't look close enough.
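For reference, an enum-typed overload in the spirit of the SetSettingIfSpecified pattern discussed here might look like the sketch below. The MicrophoneKind values and the "negative flag value means not specified" convention are assumptions for illustration, not audioproc_f's actual code; `std::optional` stands in for `rtc::Optional`.

```cpp
#include <optional>

// Hypothetical microphone kinds; the names are illustrative only.
enum class MicrophoneKind { kLinear = 0, kSoftClipping = 1 };

// Hypothetical overload of the SetSettingIfSpecified pattern that
// stores the command-line flag directly as an enum. A negative flag
// value is assumed to mean "not specified on the command line".
void SetSettingIfSpecified(int flag_value,
                           std::optional<MicrophoneKind>* setting) {
  if (flag_value >= 0) {
    *setting = static_cast<MicrophoneKind>(flag_value);
  }
}
```

The range check (is the integer a valid enum value?) could live either here or in PerformBasicParameterSanityChecks, as discussed above.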
https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:675: rtc_source_set("fake_recording_device") { On 2017/05/08 12:40:49, aleloi wrote: > On 2017/05/08 11:41:33, peah-webrtc wrote: > > On 2017/05/08 09:46:49, aleloi wrote: > > > On 2017/05/05 20:25:21, peah-webrtc wrote: > > > > Is it really worthwhile having a separate build target for the > > > > fake_recording_device? I'd suggest to put it in the audioproc_f build > target > > > as: > > > > 1) It is always build together with that. > > > > 2) The build file becomes shorter. > > > > 3) The build file becomes more easy to read. > > > > > > > > WDYT? > > > > > > I don't agree with 3) in this case. Having a modular build shows what > logical > > > sub-components a big target can be divided in, and shows dependencies > between > > > sub-components. It's another clue for understanding the design of something. > > > There are additional benefits: > > > > > > * it would get simpler to use a 'fake_recording_device' for something else > > > * GN can help enforce design modularity checking that header includes are > > stated > > > as dependencies > > > > > > Link to 3 slides about build design of a (public) presentation on GN: > > > > > > https://docs.google.com/presentation/d/15Zwb53JcncHfEwHpnG_PoIbbzQ3GQi_cpujYw... > > > > You definitely have a great point in that it is good in having a modular > build. > > But to me going for separate build targets for each class is too extreme if > the > > class is not used elsewhere. > > > > *To me this class seems very tied to how audioproc_f is constructed and I > don't > > see how it can be used in any other of our simulators. What is your view on > > where it could be used elsewhere? 
> > > > *Regarding the modularity checking, there is a point, but I doubt we should do > > this for individual classes. Or should we be that extreme? > > > > *Regarding the compilation, that should only hold if fake_recording_device is > an > > optional part in some builds of audioproc_f, but afaics it is always build in > as > > a mandatory part, and will remain so. It could make sense in the linking > phase, > > but for one class that should not make a difference, right? > > Ok, I'm halfway convinced by your arguments :) Last feeble protest: suppose we > develop realistic (and complex) simulations of real microphones that span over > several files. Maybe we'll want to use them in higher-level unit- and regression > tests. Then modularity would have been helpful. But maybe that's a problem for > later. In that case modularity would definitely make even more sense. With the current class we have file-level modularity, but then it would make sense to have wider modularity than that. (In that case, I guess it would make sense to group the files into a separate subfolder, or should one have several build targets in one folder?) I think that if that case happens, we should switch to that layout then. I don't think it makes sense to have that yet with the code in this CL. But if there are concrete plans for adding that, it may for sure make sense.
Hi, Sorry for the delay and thanks a lot for your comments. PTAL. Alessio https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:675: rtc_source_set("fake_recording_device") { On 2017/05/08 13:06:37, peah-webrtc wrote: > On 2017/05/08 12:40:49, aleloi wrote: > > On 2017/05/08 11:41:33, peah-webrtc wrote: > > > On 2017/05/08 09:46:49, aleloi wrote: > > > > On 2017/05/05 20:25:21, peah-webrtc wrote: > > > > > Is it really worthwhile having a separate build target for the > > > > > fake_recording_device? I'd suggest to put it in the audioproc_f build > > target > > > > as: > > > > > 1) It is always build together with that. > > > > > 2) The build file becomes shorter. > > > > > 3) The build file becomes more easy to read. > > > > > > > > > > WDYT? > > > > > > > > I don't agree with 3) in this case. Having a modular build shows what > > logical > > > > sub-components a big target can be divided in, and shows dependencies > > between > > > > sub-components. It's another clue for understanding the design of > something. > > > > There are additional benefits: > > > > > > > > * it would get simpler to use a 'fake_recording_device' for something else > > > > * GN can help enforce design modularity checking that header includes are > > > stated > > > > as dependencies > > > > > > > > Link to 3 slides about build design of a (public) presentation on GN: > > > > > > > > > > https://docs.google.com/presentation/d/15Zwb53JcncHfEwHpnG_PoIbbzQ3GQi_cpujYw... > > > > > > You definitely have a great point in that it is good in having a modular > > build. > > > But to me going for separate build targets for each class is too extreme if > > the > > > class is not used elsewhere. 
> > > > > > *To me this class seems very tied to how audioproc_f is constructed and I > > don't > > > see how it can be used in any other of our simulators. What is your view on > > > where it could be used elsewhere? > > > > > > *Regarding the modularity checking, there is a point, but I doubt we should > do > > > this for individual classes. Or should we be that extreme? > > > > > > *Regarding the compilation, that should only hold if fake_recording_device > is > > an > > > optional part in some builds of audioproc_f, but afaics it is always build > in > > as > > > a mandatory part, and will remain so. It could make sense in the linking > > phase, > > > but for one class that should not make a difference, right? > > > > Ok, I'm halfway convinced by your arguments :) Last feeble protest: suppose we > > develop realistic (and complex) simulations of real microphones that span over > > several files. Maybe we'll want to use them in higher-level unit- and > regression > > tests. Then modularity would have been helpful. But maybe that's a problem for > > later. > > In that case definitely modelarity would make even more sense. With the current > class we have file-modularity but then it would make sense to have a > wider-modularity than that. (In that case, I guess it would make sense to group > the files into a separate subfolder, or should one have several buildtargets in > one folder?) > > I think that if that case happens, we should change to have that then. I don't > think it makes sense to have that yet with the code in this CL. But if there are > concrete plans of adding that, it may for sure make sense to have that. Thanks for the discussion :) Let's leave it as it is, also because in the future the FakeRecordingDevice class might be injected into a new AudioDevice class that simulates a dummy mic (needed to mock the actual mic). https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... 
File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:71: LOG(LS_VERBOSE) << "Setting mic gain undo level from AEC dump"; On 2017/05/05 20:25:21, peah-webrtc wrote: > I think that the log line is a bit unclear. > What about > Simulating analog microphone gain I see your point; it comes out of the blue as it is. I just wanted to notify the user that the mic level undo op will take place because an AEC dump is given and the analog gain mic simulation mode is enabled. So either we remove it or we make it better. I'll try the second. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:176: LOG(LS_VERBOSE) << "AEC dump overriding AGC suggested level to " On 2017/05/05 20:25:21, peah-webrtc wrote: > (last_specified_microphone_level_ != msg.level()) will be true all the time if > you are simulating the microphone behavior. At that point in the code, there's no simulation of the mic (else branch of "if (settings_.simulate_mic_gain) {"). > And according to previous > statements, I think the value of having permanent log lines for this is very > limited. I therefore think you should remove lines 173-178. Log removed. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:124: std::vector<rtc::ArrayView<const float>> data_view; On 2017/05/05 20:25:21, peah-webrtc wrote: > The lines 122-129 are only there to conform to the API of SimulateAnalogGain. 
As > that API is only used here, I'd prefer to change SimulateAnalogGain to be able > to receive in_buf_ as it is, which would avoid the need for 122-129. Done. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:159: last_specified_microphone_level_ = ap_->gain_control()->stream_analog_level(); On 2017/05/05 20:25:21, peah-webrtc wrote: > With the changes done in this CL, I think last_specified_microphone_level_ > should be renamed as it is used a bit different. > What about last_specified_microphone_level_ -> new_analog_level_ ? > Done. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:168: << stats.residual_echo_likelihood << ", "; On 2017/05/05 20:25:21, peah-webrtc wrote: > Unrelated change? yup, sorry. I switched back. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.h:80: rtc::Optional<int> simulated_mic_kind; On 2017/05/08 12:40:49, aleloi wrote: > On 2017/05/08 11:41:33, peah-webrtc wrote: > > On 2017/05/08 10:15:23, aleloi wrote: > > > I suggest convert to FakeRecordingDevice::LevelToScalingMappingKind directly > > and > > > store as that type. Can Optional<int> agc_mode be changed to an enum at the > > same > > > time as well? > > > > That makes sense, but the code storing the command-line flag will be a bit > more > > messy. As it is now, this is done in the same manner for all parameters using > > the SetSettingIfSpecified method. To follow this pattern, a new version of > > SetSettingIfSpecified should be created. 
> > > > That sanity check is for the other parameters now done in > > PerformBasicParameterSanityChecks but it could for sure be done in > > SetSettingIfSpecified as well. > > > > I don't have a strong opinion, apart from that it should make the code easier > to > > read and less error prone. > > > > Ok, I didn't look close enough. If I understand correctly, it's fine as it is now since it's uniform with the rest of the params. Therefore, I didn't change anything. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:34: [scaling_factor](float x) { return scaling_factor * x; }); On 2017/05/05 20:25:21, peah-webrtc wrote: > This scaling could create absolute sample values above 1.0f. Saturation is > needed for this to be realistic. Good point. And we should apply saturation twice: (1) apply the undo level (if any) and hard-clip; (2) apply the scaling factor and hard-clip. If everything is done at once and there is an undo level causing saturation combined with a low scaling factor, we then miss the clipping in the signal. Do you agree? https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:52: return rtc::saturated_cast<int16_t>(scaling_factor * x); On 2017/05/05 20:25:21, peah-webrtc wrote: > This simulation of the microphone scaling is too simple, but could serve as a > starting point. However, as the rest of the code indicates that the microphone > type (kind) only affects the gain curve mapping I want to point this out already > now. 
> > For instance, to properly simulate a microphone (for the purpose of the AGC > development) you will almost immediately run into the need for simulating > soft-saturation, which is not handled in the current design. > > I would rather have preferred a layout where there is a call to a separate > microphone simulation method for each of the microphone types supported. > > The same applies to the method on line 22. > Great point! Thanks. I redesigned what goes on behind the scenes. Now we have callbacks responsible for changing a sample given the mic level to simulate and the optional real device level. These callbacks can also depend on the sample itself, which is the case for soft-clipping (e.g., https://ccrma.stanford.edu/~jos/pasp/Soft_Clipping.html). This design also allows the simulation to have memory: it suffices to add private members to FakeRecordingDevice, and the returned callbacks will have read-write access to such members (which can be used to implement memory). I checked this by adding an int member which is incremented every time AnalogGainSimulatorIdentityFloat is called. It worked (and obviously I removed that code). https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:63: LOG(LS_VERBOSE) << "[fake rec device] Cannot undo analog level of 0."; On 2017/05/08 11:41:33, peah-webrtc wrote: > On 2017/05/08 10:15:23, aleloi wrote: > > On 2017/05/05 20:25:21, peah-webrtc wrote: > > > As this is only simulation/testing code, you should not include error > > handling. 
> > No, nothing that occurs in real recordings should be considered an error. I'm a bit confused here. We shouldn't include error handling, but neither should we treat a level of 0 as an error. Then the correct way of dealing with real device levels of 0 should be the original approach from Alex, that is, logging and skipping the level undo part. WDYT? For now I went for the DCHECK way. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:23: // Class for simulating a microphone with analog gain. The class wraps a mapping On 2017/05/05 20:25:21, peah-webrtc wrote: > This seems out of date, could you please update. Done. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:29: // NotifyAudioDeviceLevel(|recorded level of real microphone|); // Optional! On 2017/05/05 20:25:21, peah-webrtc wrote: > NotifyAudioDeviceLevel is no longer present, right? Done. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:38: enum class LevelToScalingMappingKind { On 2017/05/05 20:25:21, peah-webrtc wrote: > I think the mapping name is too specific. What if you want to simulate two > microphones that both have a linear mapping, but that differ in the way the > soft-saturation works. I'd propose to call this something more general, like > MicrophoneKind. Done. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device.h:52: void SimulateAnalogGain(std::vector<rtc::ArrayView<const float>> src, On 2017/05/05 20:25:21, peah-webrtc wrote: > Seeing how this is used, I don't see why there needs to be separate outputs and > inputs (now the same arrayview is used for input as for output). This is in > theory error-prone, if the implementation would rely on them being different. > I'd rather prefer to have one input/output variable as that is how it is used. Done. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:53: std::vector<rtc::ArrayView<float>> dest, On 2017/05/05 20:25:21, peah-webrtc wrote: > The style guide says that input-only parameters should come first in the > parameter list. Done. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:55: int real_device_level = kRealDeviceLevelUnknown); On 2017/05/05 20:25:21, peah-webrtc wrote: > As real_device_level is actually always used whenever FakeAudioDevice is used, > there is no need for a default parameter value. Done. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:62: int real_device_level = kRealDeviceLevelUnknown); On 2017/05/08 10:15:23, aleloi wrote: > Note that the comment about ordering and default param comment applies here as > well. Done. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:66: // - gain curve (which depends on |mapping_kind_|), On 2017/05/05 20:25:21, peah-webrtc wrote: > The style guide says "Declaration comments describe use of the function (when it > is non-obvious); comments at the definition of a function describe operation." 
> As the method is private and the class is small, I'd prefer to move all comments > to the source file for this method. Done. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/wav_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/wav_based_simulator.cc:64: real_recording_device_level_ = FakeRecordingDevice::kRealDeviceLevelUnknown; On 2017/05/05 20:25:21, peah-webrtc wrote: > This seems like exactly what to use Optional for. I'd rather prefer > real_recording_device_level_ to be an rtc::Optional<int> than having to rely on > magic values for that. With that, these lines can be removed. Done.
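The rtc::Optional<int> suggestion above can be sketched as follows. This is a minimal illustration using std::optional (which has the same shape as rtc::Optional); the LevelTracker name is hypothetical and not part of the CL:

```cpp
#include <optional>

// Hypothetical sketch of the rtc::Optional<int> suggestion (shown here with
// std::optional, which has the same shape): "level unknown" is the empty
// state, so no kRealDeviceLevelUnknown magic value is needed.
class LevelTracker {
 public:
  void OnLevelReported(int level) { level_ = level; }
  void Reset() { level_.reset(); }  // Back to "unknown".
  bool HasLevel() const { return level_.has_value(); }
  int LevelOr(int fallback) const { return level_.value_or(fallback); }

 private:
  std::optional<int> level_;
};
```

With something like this, the assignments of the magic value in wav_based_simulator.cc would reduce to a Reset() call.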
Hi, Thanks for the new patch! I have added some comments. PTAL https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:675: rtc_source_set("fake_recording_device") { On 2017/05/16 08:53:02, AleBzk wrote: > On 2017/05/08 13:06:37, peah-webrtc wrote: > > On 2017/05/08 12:40:49, aleloi wrote: > > > On 2017/05/08 11:41:33, peah-webrtc wrote: > > > > On 2017/05/08 09:46:49, aleloi wrote: > > > > > On 2017/05/05 20:25:21, peah-webrtc wrote: > > > > > > Is it really worthwhile having a separate build target for the > > > > > > fake_recording_device? I'd suggest to put it in the audioproc_f build > > > target > > > > > as: > > > > > > 1) It is always build together with that. > > > > > > 2) The build file becomes shorter. > > > > > > 3) The build file becomes more easy to read. > > > > > > > > > > > > WDYT? > > > > > > > > > > I don't agree with 3) in this case. Having a modular build shows what > > > logical > > > > > sub-components a big target can be divided in, and shows dependencies > > > between > > > > > sub-components. It's another clue for understanding the design of > > something. > > > > > There are additional benefits: > > > > > > > > > > * it would get simpler to use a 'fake_recording_device' for something > else > > > > > * GN can help enforce design modularity checking that header includes > are > > > > stated > > > > > as dependencies > > > > > > > > > > Link to 3 slides about build design of a (public) presentation on GN: > > > > > > > > > > > > > > > https://docs.google.com/presentation/d/15Zwb53JcncHfEwHpnG_PoIbbzQ3GQi_cpujYw... > > > > > > > > You definitely have a great point in that it is good in having a modular > > > build. 
> > > > But to me going for separate build targets for each class is too extreme > if > > > the > > > > class is not used elsewhere. > > > > > > > > *To me this class seems very tied to how audioproc_f is constructed and I > > > don't > > > > see how it can be used in any other of our simulators. What is your view > on > > > > where it could be used elsewhere? > > > > > > > > *Regarding the modularity checking, there is a point, but I doubt we > should > > do > > > > this for individual classes. Or should we be that extreme? > > > > > > > > *Regarding the compilation, that should only hold if fake_recording_device > > is > > > an > > > > optional part in some builds of audioproc_f, but afaics it is always build > > in > > > as > > > > a mandatory part, and will remain so. It could make sense in the linking > > > phase, > > > > but for one class that should not make a difference, right? > > > > > > Ok, I'm halfway convinced by your arguments :) Last feeble protest: suppose > we > > > develop realistic (and complex) simulations of real microphones that span > over > > > several files. Maybe we'll want to use them in higher-level unit- and > > regression > > > tests. Then modularity would have been helpful. But maybe that's a problem > for > > > later. > > > > In that case definitely modelarity would make even more sense. With the > current > > class we have file-modularity but then it would make sense to have a > > wider-modularity than that. (In that case, I guess it would make sense to > group > > the files into a separate subfolder, or should one have several buildtargets > in > > one folder?) > > > > I think that if that case happens, we should change to have that then. I don't > > think it makes sense to have that yet with the code in this CL. But if there > are > > concrete plans of adding that, it may for sure make sense to have that. 
> > Thanks for the discussion :) Let's leave as it is, also because in the future > the FakeRecordingDevice class might be injected in a new AudioDevice class that > simulates a dummy mic (needed to mock the actual mic). I'm not convinced with that usecase:-). Could you please describe this AudioDevice class? With the code that is now, I don't see a clear need/use for such a class and I see no need for that for the purpose of the AGC2 work. I'd prefer to leave this in the standard build target for audioproc_f until that change is made. The main reasons for that are: -I strongly doubt we want to use build file targets that are specific for classes if there is no special requirement for that. -There is no case when we would like to build audioproc_f without this target. -I cannot foresee any use of this class in any other build target. -The audioproc_f target is so small anyway. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:34: [scaling_factor](float x) { return scaling_factor * x; }); On 2017/05/16 08:53:03, AleBzk wrote: > On 2017/05/05 20:25:21, peah-webrtc wrote: > > This scaling could create absolute sample values above 1.0f. Saturation is > > needed for this to be realistic. > > Good point. And we should apply saturation twice: > 1. apply the undo level (if any) and use hard-clipping > 2. apply the scaling factor and use hard-clipping > > If everything is done at once and there is an undo level causing saturation with > a low scaling factor, we then miss the clipping in the signal. > > Do you agree? I don't see why we need the hard-clipping in step 1. https://codereview.webrtc.org/2834643002/diff/120001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device.cc:63: LOG(LS_VERBOSE) << "[fake rec device] Cannot undo analog level of 0."; On 2017/05/16 08:53:03, AleBzk wrote: > On 2017/05/08 11:41:33, peah-webrtc wrote: > > On 2017/05/08 10:15:23, aleloi wrote: > > > On 2017/05/05 20:25:21, peah-webrtc wrote: > > > > As this is only simulation/testing code, you should not include error > > > handling. > > > > Instead just DCHECK as that will simplify the code a lot. That is also how > > it > > > is > > > > handled elsewhere in audioproc_f. > > > > > > > > Silently (at least if you don't write out the verbose log-lines) will > easily > > > > cause this error to go unnoticed. > > > > > > I think a level of 0 can appear in real recordings. Should that be treated > as > > an > > > error (and crash in debug-mode with DCHECK)? WDYT? > > > > No, nothing that occurs in real recordings should be considered an error. > > I'm a bit confused here. > We shouldn't include error handling, but we should neither treat a 0 level as an > error. > Then, the correct way of dealing with 0 real device levels should be the > original approach from Alex, that is logging and skipping the level undo part. > WDYT? > > For now I went for the DCHECK way. The action to take there is totally dependent on the simulation of the microphone. My point was mainly that it does not make sense to consider something that is in a real device an error, as that in my mind rather means that the microphone simulation is wrong (what is more correct, the reality or the simulation ;-) ). If you insist, I'm fine with a DCHECK for now, as this is not yet really used, but once you do this properly the DCHECK does not make sense at all. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:172: *real_recording_device_level_ = msg.level(); Why do you need to store msg.level(); in different values when settings_.simulate_mic_gain is, and is not, set? I'd prefer that you store it in one value, and then have a corresponding if-statement (if needed) in audio_processing_simulator. As it is now, I think it is harder than necessary to follow the code. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:87: new_analog_level_(settings.initial_mic_gain), There is a naming confusion between level and gain which I think could be more clear.Here new_analog_level_ is initialized to initial_mic_gain. I think I prefer level over gain as that is the naming in stream_analog_level() , WDYT? https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:122: RTC_DCHECK_EQ(in_config_.num_channels(), in_buf_->num_channels()); Why are these DCHECK-s needed here now? There should be nothing new that is changed in this CL that requires that, right? If the DCHECKS is something that missed to be checked I think that should instead happen when in_config_ and in_buf_ are created. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:129: // Notify the mic gain level to AGC. Both using gain and level here sounds like duplicate terminology. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:131: ap_->gain_control()->set_stream_analog_level(new_analog_level_)); This is not correct I think. 
There is nothing in the usage of APM that ensures that the actually applied analog level is what is reported was applied. My suggestion is to make new_analog_level_ part of fake_recording_device_ (but with a new name) and let this method ask fake_recording_device_ for what level is applied. E.g.
ap_->gain_control()->set_stream_analog_level(settings_.simulate_mic_gain ? fake_recording_device_.GetLevel() : real_recording_device_level_);
ap_->ProcessStream(...);
int level = ap_->gain_control()->stream_analog_level();
if (settings_.simulate_mic_gain) {
  fake_recording_device_.GetLevel(level);
}
https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:147: // Store the mic gain level suggested by AGC if required. Both using gain and level here sounds like duplicate terminology. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:23: simulator_callback_int16_ = AnalogGainSimulatorIdentityInt16(); These are not used as callbacks, so I don't think that should be part of the name. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:28: simulator_callback_int16_ = AnalogGainSimulatorLinearInt16(); I would rather have an interface with different implementations of the analog gains than doing it like this with method pointers (I know it sounds like the opposite to what I stated before but this is a bit different). For this to work the callbacks need to have access to the full scope of the FakeRecordingDevice class and I think the resulting code becomes a bit more complicated than it has to be.
For instance, -you need to declare SimulatorCallbackInt16 in the header, as well as the variables of SimulatorCallbackInt16. -The std::transform calls contain lambdas with full object scope that call methods using method pointers. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:42: std::transform( I think std::for_each is better suited here. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:45: return simulator_callback_float_(x, level, real_device_level); As you stated offline, this will not scale with memory in the microphone. But lets keep it for now until we know if we need memory. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:50: void FakeRecordingDevice::SimulateAnalogGain( Why is this needed as a separate method? https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:65: std::transform( std::for_each https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:120: kSampleMinInt16 : (sample > kSampleMaxInt16 ? kSampleMaxInt16 : sample); Why do you clip the sample value to +-255? https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:120: kSampleMinInt16 : (sample > kSampleMaxInt16 ? kSampleMaxInt16 : sample); Use std::min/std::max instead https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:124: return sample < kSampleMinFloat ? Use std::min/std::max instead https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... 
File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:24: I think it would make sense to use the test namespace, right? https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:30: const int16_t kSampleMinInt16 = 255; These constants are not used in the header file. Please put them into the anonymous namespace in the source file. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:57: // Simulates the analog gain on an std::vector<rtc::ArrayView<float>> buffer. The buffer type is stated in the method header: Simulates the analog gain on an std::vector<rtc::ArrayView<float>> buffer. -> Simulates the analog gain on the samples in buffer. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:60: // nullptr. real_device_level is not a pointer but an optional so you cannot set it to nullptr. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:62: std::vector<rtc::ArrayView<float>> buffer); Why do you need 3 methods for SimulateAnalogGain? I can only see 2 being used in the code.
Patchset #7 (id:160001) has been deleted
Thanks a lot for the comments! I think there is a point we should discuss offline (suggested in the comments). Alessio https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:684: > I'm not convinced with that usecase:-). > Could you please describe this AudioDevice class? With the code that is now, I > don't see a clear need/use for such a class and I see no need for that for the > purpose of the AGC2 work. I spoke to Alex. Maybe it's too early to consider the use case I had in mind. So just ignore this point :) > I'd prefer to leave this in the standard build target for audioproc_f until that > change is made. The main reasons for that are: > -I strongly doubt we want to use build file targets that are specific for > classes if there is no special requirement for that. > -There is no case when we would like to build audioproc_f without this target. > -I cannot foresee any use of this class in any other build target. > -The audioproc_f target is so small anyway. Well, I've tried to put fake rec device into audioproc_f, but then we have another problem. fake_recording_device_unittest.cc, currently belonging to the "audio_processing_unittests" target, depends on fake_recording_device.h/.cc and fake_rec_device_*.h/.cc. If we remove "fake_recording_device" from the build file, then we have duplication in it because we need to add the same files in the deps part of "audio_processing_unittests". IMHO, I prefer it as it is now rather than duplicating the dependencies. Alternatively, we can create a target for audioproc_f unit tests, but if I think of this solution, I find it unnecessarily complex. WDYT? https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:172: *real_recording_device_level_ = msg.level(); On 2017/05/16 12:19:35, peah-webrtc wrote: > Why do you need to store msg.level(); in different values when > settings_.simulate_mic_gain is, and is not, set? Because the level is used for different purposes. Case 1. When simulating the microphone gain and an AEC dump is provided, we first need to virtually restore the unmodified mic level - i.e., a fixed level that would have been observed, if AGC had not modified the mic gain. Note that we are undoing assuming that the user does not change the mic level and that the unmodified mic gain is the maximum. Then the AGC instance in APM will notify a suggested analog level, which will be used by the gain simulator. Case 2. When the mic gain simulation is disabled, then we should inform AGC with the level that has been used during the recorded call. It's done to call gain_control.set_stream_analog_level(), ProcessStream(), gain_control.stream_analog_level() in the correct order to reproduce the pipeline side effects. > I'd prefer that you store it in one value, and then have a corresponding > if-statement (if needed) in audio_processing_simulator. As it is now, I think it > is harder than necessary to follow the code. This would cause confusion and in the simulated mic gain case using one variable is not even possible (otherwise AGC will overwrite the undo level). Let's discuss this offline if needed. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:87: new_analog_level_(settings.initial_mic_gain), On 2017/05/16 12:19:35, peah-webrtc wrote: > There is a naming confusion between level and gain which I think could be more > clear.Here new_analog_level_ is initialized to initial_mic_gain. Right, i should have been consistent. > > I think I prefer level over gain as that is the naming in stream_analog_level() > , WDYT? Yup! https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:122: RTC_DCHECK_EQ(in_config_.num_channels(), in_buf_->num_channels()); On 2017/05/16 12:19:35, peah-webrtc wrote: > Why are these DCHECK-s needed here now? There should be nothing new that is > changed in this CL that requires that, right? > > If the DCHECKS is something that missed to be checked I think that should > instead happen when in_config_ and in_buf_ are created. Sorry, I should have added a comment for you here instead of a TODO. If you see what's changed, you can notice that I call num_channels() and num_frames() on in_buf_, whereas before they were called on in_config_. Since in_buf_ is passed into SimulateAnalogGain(), I just use it to get the number of channels and frames. Anyways, I tested that part of the code, so I remove the DCHECKs. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:129: // Notify the mic gain level to AGC. On 2017/05/16 12:19:35, peah-webrtc wrote: > Both using gain and level here sounds like duplicate terminology. Done. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:131: ap_->gain_control()->set_stream_analog_level(new_analog_level_)); On 2017/05/16 12:19:35, peah-webrtc wrote: > This is not correct I think. 
There is nothing in the usage of APM that ensures > that the actually applied analog level is what is reported was applied. I completely agree with this point. In fact, a user could for instance manually change the input mic gain, or the hardware/driver can override the set value. > > My suggestion is to make new_analog_level_ part of fake_recording device_ (but > with a new name) and let this method ask fake_recording device_ for what level > is applied. > > E.g > ap_->gain_control()->set_stream_analog_level(settings_.simulate_mic_gain ? > fake_recording device_.GetLevel() : real_recording_device_level_); > > ap_->ProcessStream(...); > > int level = ap_->gain_control()->stream_analog_level(); > if (settings_.simulate_mic_gain) { > fake_recording device_.GetLevel(level); > } > > Maybe I'm wrong, but what you suggest looks exactly like it was when I first submitted this CL. Look at https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... around line 115. There was a fake mic class storing the mic value. Maybe at that time, some details were missing or unclear. I rewrote FakeRecordingDevice, which holds the current mic level replacing the role of |new_analog_level_|. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:147: // Store the mic gain level suggested by AGC if required. On 2017/05/16 12:19:35, peah-webrtc wrote: > Both using gain and level here sounds like duplicate terminology. Right, thanks for the comment. I only left level. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device.cc:23: simulator_callback_int16_ = AnalogGainSimulatorIdentityInt16(); On 2017/05/16 12:19:36, peah-webrtc wrote: > These are not used as callbacks, so I don't think that should be part of the > name. Acknowledged. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:28: simulator_callback_int16_ = AnalogGainSimulatorLinearInt16(); On 2017/05/16 12:19:36, peah-webrtc wrote: > I would rather have an interface with different implementations of the analog > gains than doing it like this with method pointers (I know it sounds like the > opposite to what I stated before but this is a bit different). For this to work > the callbacks need to have access to the full scope of the FakeRecordingDevice > class and I think the resulting code becomes a bit more complicated than it has > to. > > For instance, > -you need to declare SimulatorCallbackInt16 in the header, as well as the > variables of SimulatorCallbackInt16. > -The std::transform calls contain lambdas with full object scope that call > methods using method pointers. Perfect. Let's then go for interface + implementations :) https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:42: std::transform( On 2017/05/16 12:19:35, peah-webrtc wrote: > I think std::for_each is better suited here. Acknowledged. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:45: return simulator_callback_float_(x, level, real_device_level); On 2017/05/16 12:19:36, peah-webrtc wrote: > As you stated offline, this will not scale with memory in the microphone. But > lets keep it for now until we know if we need memory. Acknowledged. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device.cc:50: void FakeRecordingDevice::SimulateAnalogGain( On 2017/05/16 12:19:36, peah-webrtc wrote: > Why is this needed as a separate method? Please, see my answer to a related question of yours at line 63 in https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:65: std::transform( On 2017/05/16 12:19:36, peah-webrtc wrote: > std::for_each Acknowledged. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:120: kSampleMinInt16 : (sample > kSampleMaxInt16 ? kSampleMaxInt16 : sample); On 2017/05/16 12:19:36, peah-webrtc wrote: > Why do you clip the sample value to +-255? Right, I got confused with the mic level. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:120: kSampleMinInt16 : (sample > kSampleMaxInt16 ? kSampleMaxInt16 : sample); On 2017/05/16 12:19:35, peah-webrtc wrote: > Use std::min/std::max instead Done. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:124: return sample < kSampleMinFloat ? On 2017/05/16 12:19:36, peah-webrtc wrote: > Use std::min/std::max instead Done. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:24: On 2017/05/16 12:19:36, peah-webrtc wrote: > I think it would make sense to use the test namespace, right? Done. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device.h:30: const int16_t kSampleMinInt16 = 255; On 2017/05/16 12:19:36, peah-webrtc wrote: > These constants are not used in the header file. Please put them into the > anonymous namespace in the source file. Done. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:57: // Simulates the analog gain on an std::vector<rtc::ArrayView<float>> buffer. On 2017/05/16 12:19:36, peah-webrtc wrote: > The buffer type is stated in the method header: > > Simulates the analog gain on an std::vector<rtc::ArrayView<float>> buffer. -> > Simulates the analog gain on the samples in buffer. I would leave it as it is to distinguish the 3 methods by comment. WDYT? https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:60: // nullptr. On 2017/05/16 12:19:36, peah-webrtc wrote: > real_device_level is not a pointer but an optional so you cannot set it to > nullptr. Comment changed. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:62: std::vector<rtc::ArrayView<float>> buffer); On 2017/05/16 12:19:36, peah-webrtc wrote: > Why do you need 3 methods for SimulateAnalogGain? I can only see 2 being used in > the code. I left SimulateAnalogGain(int, rtc::Optional<int>, std::vector<rtc::ArrayView<float>>) because it's used in the unit tests and from SimulateAnalogGain(int, rtc::Optional<int>, ChannelBuffer<float>* buffer), which is just an adapter in order to call the first method. Also, I thought it was good to leave an interface supporting ArrayView objects for future use. WDYT? https://codereview.webrtc.org/2834643002/diff/180001/webrtc/modules/audio_pro... 
File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right): https://codereview.webrtc.org/2834643002/diff/180001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.h:172: std::unique_ptr<FakeRecordingDevice> fake_recording_device_; |fake_recording_device_| holds both the mic level and the (optional) undo level. |real_recording_device_level_| had a misleading name, so in FakeRecordingDevice it is now named |undo_mic_level_|. Let's discuss this point offline maybe. https://codereview.webrtc.org/2834643002/diff/180001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_rec_device_linear.cc (right): https://codereview.webrtc.org/2834643002/diff/180001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_rec_device_linear.cc:35: } Let's continue the discussion about clipping between the undo operation and the gain simulation here. > > > This scaling could create absolute sample values above 1.0f. Saturation is > > > needed for this to be realistic. > > > > Good point. And we should apply saturation twice: > > 1. apply the undo level (if any) and use hard-clipping > > 2. apply the scaling factor and use hard-clipping > > > > If everything is done at once and there is an undo level causing saturation > with > > a low scaling factor, we then miss the clipping in the signal. > > > > Do you agree? > > I don't see why we need the hard-clipping in step 1. 
To check if this way is correct, imagine the following:
- we have an AECdump and we want to simulate AGC, so we need to virtually restore the unmodified mic level,
- at some point of the call, the mic gain is low (maybe because someone was shouting or was too close to the mic),
- the low gain was probably selected by AGC (during the call) to avoid saturation,
- we virtually restore the unmodified mic level first, since we want to simulate what we would have seen if the mic gain had been higher,
- so, back to the shouting/too close case, a higher gain could have turned into saturation.
Am I elaborating too much? Is this reasoning correct? And even if it makes sense and it's correct, do we want this?
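The reasoning above can be checked with a small numeric example. The gain values are made up, and Clip/TwoStage/SingleStage are hypothetical helpers rather than the CL's actual code:

```cpp
#include <algorithm>

float Clip(float x) { return std::max(-1.0f, std::min(1.0f, x)); }

// Undo the recorded analog level (clipping, as the real ADC would have),
// then apply the simulated gain and clip again.
float TwoStage(float sample, float undo_gain, float sim_gain) {
  return Clip(Clip(sample * undo_gain) * sim_gain);
}

// Fold both gains together and clip only once at the end.
float SingleStage(float sample, float undo_gain, float sim_gain) {
  return Clip(sample * undo_gain * sim_gain);
}
```

For sample 0.8, an undo gain of 2.0 and a simulated gain of 0.4, the two-stage version clips the intermediate 1.6 down to 1.0 and returns 0.4, while the single-stage version returns 0.64: without the clipping in step 1, the saturation that the higher restored gain would have caused is lost.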
Hi, Thanks for the new patch! I have some more comments. PTAL https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:684: On 2017/05/17 11:52:22, AleBzk wrote: > > I'm not convinced with that usecase:-). > > Could you please describe this AudioDevice class? With the code that is now, I > > don't see a clear need/use for such a class and I see no need for that for the > > purpose of the AGC2 work. > > I spoke to Alex. Maybe it's too early to consider the use case I had in mind. > So just ignore this point :) > > > I'd prefer to leave this in the standard build target for audioproc_f until > that > > change is made. The main reasons for that are: > > -I strongly doubt we want to use build file targets that are specific for > > classes if there is no special requirement for that. > > -There is no case when we would like to build audioproc_f without this target. > > -I cannot foresee any use of this class in any other build target. > > -The audioproc_f target is so small anyway. > > Well, I've tried to put fake rec device into audioproc_f, but then we have > another problem. > fake_recording_device_unittest.cc, currently belonging to the > "audio_processing_unittests" target, depends on fake_recording_device.h/.cc and > fake_rec_device_*.h/.cc. If we remove "fake_recording_device" from the build > file, then we have duplication in it because we need to add the same files in > the deps part of "audio_processing_unittests". > > IMHO, I prefer as it is now rather than duplicating the dependencies. > Alternatively, we can create a target for audioproc_f unit tests, but if I think > of this solution, I find it unnecessarily complex. > > WDYT? Good point! Then we need to do this. But then I'd actually rather put this as a more generic target with a more generic name. 
As it is now, it is very specific to the fake_recording_device and if you would add another class to that target it would not fit the name. I.e., go for a separate target but I think you should change the name. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:172: *real_recording_device_level_ = msg.level(); On 2017/05/17 11:52:23, AleBzk wrote: > On 2017/05/16 12:19:35, peah-webrtc wrote: > > Why do you need to store msg.level(); in different values when > > settings_.simulate_mic_gain is, and is not, set? > > Because the level is used for different purposes. > > Case 1. > When simulating the microphone gain and an AEC dump is provided, we first need > to virtually restore the unmodified mic level - i.e., a fixed level that would > have been observed, if AGC had not modified the mic gain. Note that we are > undoing assuming that the user does not change the mic level and that the > unmodified mic gain is the maximum. > Then the AGC instance in APM will notify a suggested analog level, which will be > used by the gain simulator. > > Case 2. > When the mic gain simulation is disabled, then we should inform AGC with the > level that has been used during the recorded call. It's done to call > gain_control.set_stream_analog_level(), ProcessStream(), > gain_control.stream_analog_level() in the correct order to reproduce the > pipeline side effects. > > > I'd prefer that you store it in one value, and then have a corresponding > > if-statement (if needed) in audio_processing_simulator. As it is now, I think > it > > is harder than necessary to follow the code. > > This would cause confusion and in the simulated mic gain case using one variable > is not even possible (otherwise AGC will overwrite the undo level). 
> > Let's discuss this offline if needed. I agree that the usage is different; what I propose is to put that logic into audio_processing_simulator.cc instead. This is the only place where msg.level() is read, and now it is, based on settings_.simulate_mic_gain, stored in either of two variables. Then in the method ProcessStream in audio_processing_simulator.cc we again have a conditional on settings_.simulate_mic_gain and do specific processing in which both of the above variables are used. What about storing msg.level() into one variable, and then moving the if-statement here to ProcessStream? Then all of this logic is moved there. As it is now, the roles of new_analog_level_ and real_recording_device_level_ are quite confusing to me, but that will definitely be easier to follow if they are set in the same place. Note that both new_analog_level_ and real_recording_device_level_ are only used here and inside ProcessStream. The same holds for settings_.simulate_mic_gain. Or do you have a specific reason why this assignment needs to be in this method? https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:131: ap_->gain_control()->set_stream_analog_level(new_analog_level_)); On 2017/05/17 11:52:23, AleBzk wrote: > On 2017/05/16 12:19:35, peah-webrtc wrote: > > This is not correct I think. There is nothing in the usage of APM that ensures > > that the actually applied analog level is what is reported was applied. > > I completely agree with this point. > In fact, a user could for instance manually change the input mic gain, or the > hardware/driver can override the set value.
> > > > > My suggestion is to make new_analog_level_ part of fake_recording_device_ (but > > with a new name) and let this method ask fake_recording_device_ for what level > > is applied. > > > > E.g > > ap_->gain_control()->set_stream_analog_level(settings_.simulate_mic_gain ? > > fake_recording_device_.GetLevel() : real_recording_device_level_); > > > > ap_->ProcessStream(...); > > > > int level = ap_->gain_control()->stream_analog_level(); > > if (settings_.simulate_mic_gain) { > > fake_recording_device_.SetLevel(level); > > } > > > > > > Maybe I'm wrong, but what you suggest looks exactly like it was when I first > submitted this CL. > Look at > https://codereview.webrtc.org/2834643002/diff/100001/webrtc/modules/audio_pro... > around line 115. There was a fake mic class storing the mic value. Maybe at that > time, some details were missing or unclear. > > I rewrote FakeRecordingDevice, which holds the current mic level, replacing the > role of |new_analog_level_|. What I stated was that "I'd prefer a decoupling between the stored stream analog level and the fake audio device for the case when there is no simulation of the analog gain..." I think that still makes sense, and it is like that for the suggested construct ap_->gain_control()->set_stream_analog_level(settings_.simulate_mic_gain ? fake_recording_device_.GetLevel() : real_recording_device_level_); I see, however, that I probably mixed up the real_recording_device_level_ and new_analog_level_ variables but, done right, I actually think that this is in line with the statement I referred to. Or maybe I missed something? https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:57: // Simulates the analog gain on an std::vector<rtc::ArrayView<float>> buffer.
On 2017/05/17 11:52:23, AleBzk wrote: > On 2017/05/16 12:19:36, peah-webrtc wrote: > > The buffer type is stated in the method header: > > > > Simulates the analog gain on an std::vector<rtc::ArrayView<float>> buffer. -> > > Simulates the analog gain on the samples in buffer. > I would leave it as it is to distinguish the 3 methods by comment. > WDYT? Let's discuss the three methods first. In general, I think it is of great value to keep the comments as short as possible. Since this type is quite verbose, and furthermore also declared 5 lines below the comment, I'd prefer not to include it in the comment as it makes the comment longer and is redundant. https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:62: std::vector<rtc::ArrayView<float>> buffer); On 2017/05/17 11:52:24, AleBzk wrote: > On 2017/05/16 12:19:36, peah-webrtc wrote: > > Why do you need 3 methods for SimulateAnalogGain? I can only see 2 being used > in > > the code. > > I left SimulateAnalogGain(int, rtc::Optional<int>, > std::vector<rtc::ArrayView<float>>) because it's used in the unit tests and from > SimulateAnalogGain(int, rtc::Optional<int>, ChannelBuffer<float>* buffer), which > is just an adapter that calls the first method. > > Also, I thought it was good to leave an interface supporting ArrayView objects > for future use. > > WDYT? I think it always makes sense to keep the API surface as small as possible, and it is better to let the unittest test the methods that are used rather than this one which is not. I don't think it is good to add API calls for potential future use before they are implemented. From my experience, in most scenarios you end up with either a) The API is never used, which means that this just becomes code bloat which makes the code harder to read. b) The API turned out not to be exactly what you wanted, which means that you need to rewrite the code anyway.
Do you have a specific use for that API interface? Otherwise I think you should hold off on adding it until you see the need.
Hi again, PTAL. I didn't directly reply to the past comments to avoid the risk of reviewing outdated code from previous patch sets. I checked the draft below and the links are all correct. Briefly, I addressed everything, the only open point is sth raised by Per (see https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... - I moved the discussion into the most recent patch set). Cheers, Alessio https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/140001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:684: On 2017/05/17 14:52:12, peah-webrtc wrote: > On 2017/05/17 11:52:22, AleBzk wrote: > > > I'm not convinced with that usecase:-). > > > Could you please describe this AudioDevice class? With the code that is now, > I > > > don't see a clear need/use for such a class and I see no need for that for > the > > > purpose of the AGC2 work. > > > > I spoke to Alex. Maybe it's too early to consider the use case I had in mind. > > So just ignore this point :) > > > > > I'd prefer to leave this in the standard build target for audioproc_f until > > that > > > change is made. The main reasons for that are: > > > -I strongly doubt we want to use build file targets that are specific for > > > classes if there is no special requirement for that. > > > -There is no case when we would like to build audioproc_f without this > target. > > > -I cannot foresee any use of this class in any other build target. > > > -The audioproc_f target is so small anyway. > > > > Well, I've tried to put fake rec device into audioproc_f, but then we have > > another problem. > > fake_recording_device_unittest.cc, currently belonging to the > > "audio_processing_unittests" target, depends on fake_recording_device.h/.cc > and > > fake_rec_device_*.h/.cc. 
If we remove "fake_recording_device" from the build > > file, then we have duplication in it because we need to add the same files in > > the deps part of "audio_processing_unittests". > > > > IMHO, I prefer as it is now rather than duplicating the dependencies. > > Alternatively, we can create a target for audioproc_f unit tests, but if I > think > > of this solution, I find it unnecessarily complex. > > > > WDYT? > > Good point! Then we need to do this. But then I'd actually rather put this as a > more generic target with a more generic name. As it is now, it is very specific > to the fake_recording_device and if you would add another class to that target > it would not fit the name. > > I.e., go for a separate target but I think you should change the name. Right, thanks. fake_recording_device -> analog_mic_simulation https://codereview.webrtc.org/2834643002/diff/180001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/180001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:69: void SimulateAnalogGain(std::vector<rtc::ArrayView<float>> buffer); @Per: thanks for your comments on keeping the API as essential as possible; I removed the methods that were only used in the unit tests and adapted the UTs accordingly. https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:180: fake_recording_device_->set_mic_level(msg.level()); @Per: you asked to move this to the parent class and store the level from the AEC dump into a local variable. First of all, note that I'm not using local variables anymore but the fake recording device directly (which is a protected member of the parent class).
Last time you reviewed this CL, you commented on an old PS using variables. Back to your main point. If we move this logic to the parent class, then the latter needs something like "if (aec dump case)", which I don't like much because for the AEC dump case we have a dedicated implementation of the abstract class. In this way instead, the parent class just does the stuff common to WAV and AECdump, whereas the additional features specific to each case are done in WavBasedSimulator and AecDumpBasedSimulator respectively. Finally, note that msg.level() has two different applications depending on whether the analog mic is simulated. To me, it has to be AecDumpBasedSimulator that knows what to change in the simulation. However, I understand that to me everything looks clear, but it could be different for the reader. So, if you still find this confusing, I propose the following changes:
- in the parent class AudioProcessingSimulator, I add "rtc::Optional<int> aec_dump_mic_level_" as a protected member
- AecDumpBasedSimulator always sets its value in there
- AudioProcessingSimulator uses aec_dump_mic_level_ if available
- since WavBasedSimulator doesn't set it, there's no need for "if (aec dump)"
WDYT?
https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:180: fake_recording_device_->set_mic_level(msg.level()); On 2017/05/23 13:56:41, AleBzk wrote: > @Per: you asked to move this to the parent class and store the level from the > AEC dump into a local variable. > > First of all, note that I'm not using local variables anymore but directly the > fake recording device (which is a protected member of the parent class). Last > time you reviewed this CL, you commented on an old PS using variables. > > Back to your main point. If we move this logic to the parent class, then the > latter needs sth like "if (aec dump case)", which I don't like much because for > the AEC dump case we have a dedicated implementation of the abstract class. > > In this way instead, the parent class just does the stuff common to WAV and > AECdump, whereas the additional features specific of each case are done in > WavBasedSimulator and AecDumpBasedSimulator respectively. > > Finally, note that msg.level() has two different applications depending on > whether the analog mic is simulated. To me, it has to be AecDumpBasedSimulator > that knows what to change in the simulation. > > However, I understand that to me everything looks clear, but it could be > different for the reader. So, if you still see this confusing I propose the > following changes: > - in the parent class AudioProcessingSimulator, I add "rtc::Optional<int> > aec_dump_mic_level_" as protected member > - AecDumpBasedSimulator always sets its value in there > - AudioProcessingSimulator uses aec_dump_mic_level_ if available > - since WavBasedSimulator doesn't set it, there's no need of "if (aec dump)" > > WDYT? 
What I don't like with this approach is that it gives the impression that the fake_recording_device actually modifies the level in audio_processing_simulator.cc even though it is specified not to simulate the mic gain. Furthermore, in order to see that this really is not the case, I'd need to dig fairly deep into fake_recording_device. I see your point that the AudioProcessingSimulator code should not need to know whether you are using a wav or an aecdump based simulation. It is definitely valid but I don't think there is any way around that in this case. The latest patch actually also has such a dependency but in a hidden way, as the fake_recording_device knows what kind of simulation is running. I'd rather make that more explicit. What about something similar to the following code:

void AecDumpBasedSimulator::PrepareProcessStreamCall() {
  ...
  level_ = rtc::Optional<int>(msg.level());
  ...
}

class AudioProcessingSimulator {
  ...
 protected:
  rtc::Optional<int> level_;
  ...
};

void AudioProcessingSimulator::ProcessStream(bool fixed_interface) {
  // Optionally use the fake recording device to simulate analog gain.
  RTC_DCHECK(fake_recording_device_);
  if (settings_.simulate_mic_gain) {
    if (fixed_interface) {
      fake_recording_device_->SimulateAnalogGain(level_, &fwd_frame_);
    } else {
      fake_recording_device_->SimulateAnalogGain(level_, in_buf_.get());
    }
    ap_->gain_control()->set_stream_analog_level(
        fake_recording_device_->mic_level());
  } else {
    // Use the level 100 as default if no other level has been externally
    // specified.
    ap_->gain_control()->set_stream_analog_level(level_ ? *level_ : 100);
  }
  ...
  // Process the current audio frame.
  ...
  // Store the mic level suggested by AGC if required.
  int new_level = ap_->gain_control()->stream_analog_level();
  if (settings_.simulate_mic_gain) {
    fake_recording_device_->set_mic_level(new_level);
  }
  ...
}

https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/fake_rec_device_identity.h (right): https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_rec_device_identity.h:11: #ifndef WEBRTC_MODULES_AUDIO_PROCESSING_TEST_FAKE_REC_DEVICE_IDENTITY_H_ Sorry, but this is not what I meant. Please put these implementations in the fake_recording_device.cc file instead. This approach adds a lot of boilerplate code for something which is quite small. What I meant was that, instead of the method pointers, you should use interfaces and only implement them locally. When you move them into separate files there is a lot of boilerplate code, about 60 lines for each. Previously each method for this was 5 lines long, and with local interfaces you should have something fairly similar. I think that until we really put something substantial in these implementations, such that they become large enough to warrant a separate file, we should simply put the implementations inside the fake_recording_device.cc file. https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:46: class FakeRecordingDevice { I really think we should put as little implementation as possible into this class now, i.e., make it as small and simple as possible. The reason for that is that we don't really know how to simulate this. I'm open to having one or two simple default implementations just to show how it is supposed to work but I don't think it makes sense to do advanced implementations in a specific way.
For instance, it is fairly clear to me that the current approach won't be sufficient to simulate a microphone with a noise suppressor (which we have on Macbooks), beamformers, or a microphone with any type of memory in its clipping behavior. Until we can rule out the need for such an implementation of the simulation, I don't think we should have a code design that precludes it, unless that code is very simple. Why not keep this as simple as possible, and not more than that? It will be much easier to discuss the implementation once we really know what is needed to simulate the device.
alessiob@webrtc.org changed reviewers: - henrik.lundin@webrtc.org
Hi, Sorry for the belated answer to your comments. Following Per's feedback, we now have the fake recording device implementations in a single file, still with dedicated classes. Much less boilerplate. Thanks! And I also moved the logic for the simulated analog gain when an AEC dump is used as input. I tested both with wav and AEC dumps, and it works as expected. Interesting to see that, with an AEC dump, the level undo op leads to having AGC suggesting the same mic levels logged in the AEC dump :) Alessio https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:180: fake_recording_device_->set_mic_level(msg.level()); On 2017/05/23 22:13:20, peah-webrtc wrote: > On 2017/05/23 13:56:41, AleBzk wrote: > > @Per: you asked to move this to the parent class and store the level from the > > AEC dump into a local variable. > > > > First of all, note that I'm not using local variables anymore but directly the > > fake recording device (which is a protected member of the parent class). Last > > time you reviewed this CL, you commented on an old PS using variables. > > > > Back to your main point. If we move this logic to the parent class, then the > > latter needs sth like "if (aec dump case)", which I don't like much because > for > > the AEC dump case we have a dedicated implementation of the abstract class. > > > > In this way instead, the parent class just does the stuff common to WAV and > > AECdump, whereas the additional features specific of each case are done in > > WavBasedSimulator and AecDumpBasedSimulator respectively. > > > > Finally, note that msg.level() has two different applications depending on > > whether the analog mic is simulated. 
To me, it has to be AecDumpBasedSimulator > > that knows what to change in the simulation. > > > > However, I understand that to me everything looks clear, but it could be > > different for the reader. So, if you still see this confusing I propose the > > following changes: > > - in the parent class AudioProcessingSimulator, I add "rtc::Optional<int> > > aec_dump_mic_level_" as protected member > > - AecDumpBasedSimulator always sets its value in there > > - AudioProcessingSimulator uses aec_dump_mic_level_ if available > > - since WavBasedSimulator doesn't set it, there's no need of "if (aec dump)" > > > > WDYT? > > What I don't like with this approach is that it gives the impression that the > fake_recording_device actually modifies the level in > audio_processing_simulator.cc even though it is specified not to simulate the > mic gain. Furthermore, in order to see this that this really is not the case I'd > need to dig fairly deep into fake_recording_device. > > I see your point about that the AudioProcessingSimulator code should not need to > know whether you are using a wav or an aecdump based simulation. It is > definitely valid but I don't think there is anyway around that in this case. The > latest patch actually also has such a dependency but in a hidden way as the > fake_recording_device knows what kind of simulation is running. I'd rather make > that more explicit. > > What about something similar to the following code: > > void AecDumpBasedSimulator::PrepareProcessStreamCall() { > ... > > level_ = rtc::Optional<int>(msg.level()); > ... > } > > > class AudioProcessingSimulator { > ... > protected: > rtc::Optional<int> level_; > ... > }; > > void AudioProcessingSimulator::ProcessStream(bool fixed_interface) { > // Optionally use the fake recording device to simulate analog gain. 
> RTC_DCHECK(fake_recording_device_); > if (settings_.simulate_mic_gain) { > if (fixed_interface) { > fake_recording_device_->SimulateAnalogGain(level_, &fwd_frame_); > } else { > fake_recording_device_->SimulateAnalogGain(level_, in_buf_.get()); > } > > ap_->gain_control()->set_stream_analog_level(fake_recording_device_->mic_level() > : level_)); > } > else { > // Use the level 100 as default if no other level has been externally > specified. > ap_->gain_control()->set_stream_analog_level(level_ ? * level_ : 100)); > } > ... > // Process the current audio frame. > ... > // Store the mic level suggested by AGC if required. > int new_level = ap_->gain_control()->stream_analog_level(); > if (settings_.simulate_mic_gain) { > fake_recording_device_->set_mic_level(new_level); > } > .. > } > > > > Done. https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_rec_device_identity.h (right): https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_rec_device_identity.h:11: #ifndef WEBRTC_MODULES_AUDIO_PROCESSING_TEST_FAKE_REC_DEVICE_IDENTITY_H_ On 2017/05/23 22:13:20, peah-webrtc wrote: > Sorry, but this is not what I meant. Please put these implementations in the > fake_recording_device.cc file instead. This approach adds a lot of boilerplate > code for something which is quite small. What I meant was that you instead of > the method pointers should use interfaces and only implement them locally. > > When you move them into separate files there is a lot of boilerplate code, about > 60 lines for each. > Previously each method for this was 5 lines long and with local interfaces, you > should have something fairly similar. 
> > I think that until we really put something substantial in these implementations > which show that they become so large as to warrant a separate file, we should > simply put the implementations inside the fake_recording_device.cc file. Done. https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/200001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:46: class FakeRecordingDevice { On 2017/05/23 22:13:20, peah-webrtc wrote: > I really think we should put as little implementation as possible into this > class now, i.e., make it as small and simple as possible. The reason for that is > that we don't really know how to simulate this. > > I'm open to having one or two simple default implementations just to show how it > is supposed to work but I don't think it makes sense to do advanced > implementations in a specific way. For instance, it is fairly clear to me that > the current approach won't be sufficient to simulate a microphone with a noise > suppressor (which we have on Macbooks), beamformers, or a microphone with any > type of memory in its clipping behavior. Until we can rule out that we don't > need such an implementation of the simulation I don't think we should have a > code design that enforces that, unless that code is very simple. > > Why not do this as simple as possible, and not more than that? It will be much > easier to discuss the implementation once we really know what is needed to > simulate the device. Done. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:130: fake_recording_device_->set_mic_level(*aec_dump_mic_level_); Line 130 overrides what is set in lines 145-147. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:132: } About lines 119-132: the AEC dump analog gain simulation logic has been moved here following Per's request. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:44: }; This is the class to be implemented for each simulated recording device. The implementations are below in the anon ns. Note that FakeRecordingDeviceWorker cannot be defined in the anon ns, because it's needed in fake_recording_device.h. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:57: void ModifySampleFloat(float* sample) override {} This is a recording device that does nothing. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:27: class FakeRecordingDeviceWorker; Forward declaration for the FakeRecordingDeviceWorker abstract class, which is defined in audio_processing_simulator.cc. The same file also contains FakeRecordingDeviceWorker implementations (in the anon ns). https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device.h:34: // 255, FakeRecordingDevice.DeviceKind.LINEAR); I fixed this as follows: // FakeRecordingDevice fake_mic( // 255, FakeRecordingDevice::DeviceKind::LINEAR); Will upload in the next PS once I get your comments.
Hi, Thanks for the new patch! I have added some further comments. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:11: #include <algorithm> Why were algorithm and utility added to the includes? Are those required by the former code? https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:19: #include "webrtc/modules/audio_processing/test/fake_recording_device.h" This include is not needed, right? https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:89: fake_recording_device_(new FakeRecordingDevice( You don't need to create this dynamically. Since you have it in the initializer list, you can instead just initialize it without dynamic allocation. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:130: fake_recording_device_->set_mic_level(*aec_dump_mic_level_); As I commented on a previous patch, I definitely don't think that the fake_recording_device should be involved when the mic gain is not simulated. If you avoid that, you can skip the set_mic_level method. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:138: fake_recording_device_->SimulateAnalogGain(&fwd_frame_); Why not just add the optional aec_dump_mic_level_ as an input to SimulateAnalogGain?
That would eliminate the need for lines 119-132. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:147: fake_recording_device_->mic_level())); This is something that I think complicates the code quite a bit. As I commented on a previous patch, I don't think that the fake recording device should at all be involved with the mic_level if the mic gain should not be simulated. As it is now, it seems from this line as if the fake_recording_device is choosing the mic level, rather than the mic level from the aec_dump being passed in. Furthermore, verifying that that is not the case is not that straightforward. Why not do as I proposed in a comment on a previous patch, and pass in *aec_dump_mic_level_ directly? https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:163: // Store the mic level suggested by AGC if required. The comment "if required" seems out of place. Could you please clarify it, or rephrase it? https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:21: // Abstract class for the different fake recording devices. The scheme for the clipping and analog gain is quite intricate, but regardless of that I don't think it is general enough to scale. Therefore, I'd propose (as I previously suggested) to have a much simpler implementation now until the real simulation work of the recording device is started, as the implementation herein is currently only placeholder functionality for this. My main concern is the clipping.
As it is now, that is hardcoded in the parent class but we have discussed clipping solutions which don't comply with the scheme that is hardcoded. How do you envision that fitting into the framework below? https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:69: float sample_f = static_cast<float>(*sample); You don't need to do the cast here. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:74: sample_f * static_cast<float>(mic_level_) / static_cast<float>( Same thing here, no casts are needed. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:74: sample_f * static_cast<float>(mic_level_) / static_cast<float>( There is a bug here I think. A float is passed to ClipSampleInt16 but since that parameter is an int16_t I guess the value could be saturated already when the parameter is passed. I thought the compiler would warn about that though. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:86: *sample * static_cast<float>(mic_level_) / static_cast<float>( Are the static casts really needed? Since sample is a float, and the conversion goes from left to right, that cast should be automatic by the rules of C++ I think. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:91: *sample * static_cast<float>(mic_level_) / 255.0f); The cast is not needed. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:123: [this](float& x) { worker_->ModifySampleFloat(&x); }); Why not pass the whole vector into ModifySampleFloat? 
https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:134: [this](int16_t& x) { worker_->ModifySampleInt16(&x); }); Why not just pass the whole vector into ModifySampleInt16? Even though this is not realtime code, it still makes sense to write it in a fast way if possible, and I don't see a big gain in terms of readability for making separate calls on a sample basis. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:29: // Abstract class for simulating a microphone with analog gain. This is not an abstract class, right? Or did I get it wrong? https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/wav_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/wav_based_simulator.cc:16: #include <utility> Why is utility needed? https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/wav_based_simulator.cc:19: #include "webrtc/modules/audio_processing/test/fake_recording_device.h" This include is not needed. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/wav_based_simulator.cc:61: : AudioProcessingSimulator(settings) {} Is this caused by git cl format? Was the indentation wrong before?
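The float-to-int16 narrowing bug flagged in this message (a float being passed to a ClipSampleInt16 whose parameter is int16_t, so the narrowing happens before the function body runs) can be made concrete with a minimal self-contained sketch. This is not the CL's code: the function names and the 0-255 mic-level scale (255 = unity gain) are illustrative assumptions based on the discussion.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Clip in the float domain *before* narrowing to int16_t. If clipping were
// delegated to the parameter type, out-of-range floats would already be
// mangled by the implicit float -> int16_t conversion at the call site.
int16_t ClipSampleFloatToInt16(float sample) {
  return static_cast<int16_t>(std::min(32767.f, std::max(-32768.f, sample)));
}

// Hypothetical linear "analog" mic gain over int16 samples; mic_level is
// assumed to live in [0, 255], with 255 meaning unity gain.
void ApplyLinearMicGain(std::vector<int16_t>* samples, int mic_level) {
  for (int16_t& s : *samples) {
    s = ClipSampleFloatToInt16(s * (mic_level / 255.f));
  }
}
```

The key point is that the clip and the cast are one operation on a float argument, so no value ever undergoes an unchecked float-to-int16 conversion.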
Patchset #10 (id:240001) has been deleted
Patchset #10 (id:250001) has been deleted
Patchset #10 (id:270001) has been deleted
Thanks Per! Getting there :) Sorry, I had an issue while merging for which I had to delete a few patch sets. My revised code is in https://codereview.webrtc.org/2834643002/#ps290001 Alessio https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:11: #include <algorithm> On 2017/06/29 05:45:26, peah-webrtc wrote: > Why was algorithm and utility added to the includes? Are those required by the > former code? Good catch! Thanks! https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:19: #include "webrtc/modules/audio_processing/test/fake_recording_device.h" On 2017/06/29 05:45:26, peah-webrtc wrote: > This include is not needed, right? Yet another good catch :) https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:89: fake_recording_device_(new FakeRecordingDevice( On 2017/06/29 05:45:27, peah-webrtc wrote: > You don't need to create this dynamically. Since you have it in the initializer > list, you can instead just initialized it without dynamic allocation. Thanks. I recalled why I added the member as a unique pointer. I wanted forward declaration in the header file. But I guess the option you suggest is preferred in the end. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:130: fake_recording_device_->set_mic_level(*aec_dump_mic_level_); On 2017/06/29 05:45:26, peah-webrtc wrote: > As I commented on a previous patch, I definitely don't think that the > fake_recording device should be involved when the mic gain is not simulated. > > If you avoid that, you can skip the set_mic_level method. Thanks. Sorry for having kept this. It's up to us to decide whether this makes sense. My view is that, when an AEC dump is used as input, AGC should behave as closely as possible to how it did in the recorded call. This can be done by informing AGC of the mic gain that was set during the call. But I also understand that, since no analog mic gain is simulated, there's not much to observe. I'll remove it as you suggest. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:138: fake_recording_device_->SimulateAnalogGain(&fwd_frame_); On 2017/06/29 05:45:26, peah-webrtc wrote: > Why not just add the optional aec_dump_mic_level_ as an input to > SimulateAnalogGain? That would eliminate the need for lines 119-132. I'd prefer it as it is now; otherwise we have to add nested "if"s since we have 4 combinations of (fixed_interface, aec_dump_input && simulate_gain). Also, from a design perspective, either we pass all the parameters (i.e., undo mic level and mic level) to SimulateAnalogGain() or just the buffer to process. WDYT? https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:147: fake_recording_device_->mic_level())); On 2017/06/29 05:45:27, peah-webrtc wrote: > This is something that I think complicates the code quite a bit. As I commented > on a previous patch, I don't think that the fake recording device should at all > be involved with the mic_level if the mic gain should not be simulated. 
> > As it is now, it seems from this line as if the fake_recording_device is > choosing the mic level, instead of that the mic level in the aec_dump is passed. What happens during a call is that AGC suggests a gain but for many different reasons, the suggestion can be overridden (e.g., the user manually sets the mic gain via OS UI). I haven't verified if AGC makes use of the actual mic gain to suggest a new one or if it uses the sequence of suggested gains blindly. That's why I think we'd better tell AGC the actual mic gain. I understand that you find this confusing, but this is an AGC API issue - and in fact we planned to eliminate the "set_stream_analog_level, ProcessStream, stream_analog_level" sequence of invocations (error prone). My suggestion is that, as long as the AGC API remains as it is, we keep this call, maybe adding more details in the comment. WDYT? > Furthermore, verifying that that is not the case is not that straightforward. > > Why not do as I proposed in a comment on a previous patch, and pass in > *aec_dump_mic_level_ directly. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:163: // Store the mic level suggested by AGC if required. On 2017/06/29 05:45:27, peah-webrtc wrote: > The comment "if required" seems out of place. Could you please clarify it, or > rephrase it? Done. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:21: // Abstract class for the different fake recording devices. On 2017/06/29 05:45:27, peah-webrtc wrote: > The scheme for the clipping and analog gain is quite intricate, but regardless > of that I don't think it is general enough to scale. 
Therefore, I'd propose (as > I previously suggested) to have a much simpler implementation now until the real > simulation work of the recording device is started, as the implementation herein > is currently only a placeholder functionality for this. > > My main concerns are the clipping. As it is now, that is hardcoded in the parent > class but we have discussed clipping solutions which don't comply to the scheme > that is hardcoded. How do you envision that to fit into the framework below? Thanks for these concerns. First, to improve the clarity, I moved constants and the hard clipping functions in the anon namespace. It's good to have them somewhere to avoid code duplication. Then note that each implementation of FakeRecordingDeviceWorker can implement its clipping strategy - not obliged to use ClipSampleInt16/ClipSampleFloat. Hence, I find the current design extremely flexible and also compact. In fact, FakeRecordingDeviceLinear::ModifySampleFloat() is just 5 lines of code excl. comments. Also note that we need ModifySampleInt16/ModifySampleFloat mainly because of the client code. Another way is to only implement the float interface and scale in [-2^15, 2^15 - 1] afterwards for int16. With that being said, if you find that having moved code to the anon namespace is still insufficient, could you then be more specific when you say that we could have a much simpler implementation? https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:69: float sample_f = static_cast<float>(*sample); On 2017/06/29 05:45:27, peah-webrtc wrote: > You don't need to do the cast here. Done. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:74: sample_f * static_cast<float>(mic_level_) / static_cast<float>( On 2017/06/29 05:45:27, peah-webrtc wrote: > Same thing here, no casts are needed. Done. 
https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:74: sample_f * static_cast<float>(mic_level_) / static_cast<float>( On 2017/06/29 05:45:27, peah-webrtc wrote: > There is a bug here I think. A float is passed to ClipSampleInt16 but since > that parameter is an int16_t I guess the value could be saturated already when > the parameter is passed. I thought the compiler would warn for that though. Right, thanks. Using goma I missed the compiler warnings. I tried the code; clipping already occurs just by casting. However, to avoid issues, I changed ClipSampleInt16 into ClipSampleFloatToInt16, with the latter accepting a float as input (instead of int16). https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:86: *sample * static_cast<float>(mic_level_) / static_cast<float>( On 2017/06/29 05:45:27, peah-webrtc wrote: > Are the static casts really needed? Since sample is of float, and the conversion > goes from left to right, that cast should be automatic by the rules of C++ I > think. Done. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:91: *sample * static_cast<float>(mic_level_) / 255.0f); On 2017/06/29 05:45:27, peah-webrtc wrote: > The cast is not needed. Done. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:123: [this](float& x) { worker_->ModifySampleFloat(&x); }); On 2017/06/29 05:45:27, peah-webrtc wrote: > Why not pass the whole vector into ModifySampleFloat? Just wanted to decouple FakeRecordingDeviceWorker implementations from a buffer type (to avoid code duplication across the implementations). However, the overhead of a lambda invoked for each sample is not that nice either. 
Thanks to your comments I realized that it's better to fully delegate FakeRecordingDeviceWorker implementations, since a simulated device may have memory and hence require a different way to iterate through the samples - depending on the specific buffer type. For instance, AudioFrame has interleaved samples and therefore care must be taken if we want to simulate soft-clipping with memory. WDYT? https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:134: [this](int16_t& x) { worker_->ModifySampleInt16(&x); }); On 2017/06/29 05:45:27, peah-webrtc wrote: > Why not just pass the whole vector into ModifySampleInt16? Even though this is > not realtime code, it still makes sense to write it in a fast way if possible, > and I don't see a big gain in terms of readability for making separate calls on > a sample basis. Addressed (see previous comment). Also good to correctly implement the iteration over channels. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:29: // Abstract class for simulating a microphone with analog gain. On 2017/06/29 05:45:27, peah-webrtc wrote: > This is not an abstract class, right? Or did I get it wrong? Sorry, this comment applies to the previous PS; I forgot to update it. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/wav_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/wav_based_simulator.cc:16: #include <utility> On 2017/06/29 05:45:27, peah-webrtc wrote: > Why is utility needed? Good catch, thanks! 
https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/wav_based_simulator.cc:19: #include "webrtc/modules/audio_processing/test/fake_recording_device.h" On 2017/06/29 05:45:27, peah-webrtc wrote: > This include is not needed. Done. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/wav_based_simulator.cc:61: : AudioProcessingSimulator(settings) {} On 2017/06/29 05:45:27, peah-webrtc wrote: > Is this caused by git cl format? Was the indendation wrong before? Right. https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/BUILD.gn (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:602: "aec_dump:aec_dump_unittests", Unrelated (merge) https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:764: "../../base:rtc_task_queue", Unrelated (merge) https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/BUILD.gn:770: "aec_dump:aec_dump_impl", Unrelated (merge) https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:71: << "Simulating analog mic gain using AEC dump as input " Just git cl format https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:24: #include "webrtc/modules/audio_processing/aec_dump/aec_dump_factory.h" Unrelated (merge) https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:95: worker_queue_("file_writer_task_queue") { last line is unrelated (merge) https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:296: ap_->DetachAecDump(); Unrelated (merge) https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:437: *settings_.aec_dump_output_filename, -1, &worker_queue_)); Unrelated (merge) https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.h:22: #include "webrtc/base/task_queue.h" Unrelated (merge) https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.h:187: rtc::TaskQueue worker_queue_; Unrelated (merge) https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:55: rtc::MakeUnique<ChannelBuffer<float>>(data[0].size(), data.size()); Just git cl format https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:63: const ChannelBuffer<float>* curr) { Just git cl format https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:77: } Just git cl format https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:85: const ChannelBuffer<float>* dst) { Just git cl format https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:100: } Just git cl format https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:134: RTC_DCHECK_NE(buff->channels()[kC][kS], kTestMultiChannelSamples[kC][kS]); Just git cl format https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:188: fake_rec_device_kind); Just git cl format https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:225: fake_rec_device_kind); Just git cl format
Hi, Thanks for the new patch! I have some more comments! https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:138: fake_recording_device_->SimulateAnalogGain(&fwd_frame_); On 2017/06/29 11:43:35, AleBzk wrote: > On 2017/06/29 05:45:26, peah-webrtc wrote: > > Why not just add the optional aec_dump_mic_level_ as an input to > > SimulateAnalogGain? That would eliminate the need for the lines 119-132. > > I'd prefer as it is now, otherwise we have to add nested "if"s since we have 4 > combinations for (fixed_interface, aec_dump_input && simulate_gain). > > Also, from a design perspective, either we pass all the parameters (i.e., undo > mic level and mic level) to SimulateAnalogGain() or just the buffer to process. > > WDYT? I would personally prefer to have it as one single call as it gives shorter code and I don't think that would complicate anything. But let's see how it turns out with the changes done in response to the comment above. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:147: fake_recording_device_->mic_level())); On 2017/06/29 11:43:35, AleBzk wrote: > On 2017/06/29 05:45:27, peah-webrtc wrote: > > This is something that I think complicates the code quite a bit. As I > commented > > on a previous patch, I don't think that the fake recording device should at > all > > be involved with the mic_level if the mic gain should not be simulated. > > > > As it is now, it seems from this line as if the fake_recording_device is > > choosing the mic level, instead of that the mic level in the aec_dump is > passed. 
> > What happens during a call is that AGC suggests a gain but for many different > reasons, the suggestion can be overridden (e.g., the user manually sets the mic > gain via OS UI). I haven't verified if AGC makes use of the actual mic gain to > suggest a new one or if it uses the sequence of suggested gains blindly. That's > why I think we'd better tell AGC the actual mic gain. > > I understand that you find this confusing, but this is an AGC API issue - and in > fact we planned to eliminate the "set_stream_analog_level, ProcessStream, > stream_analog_level" sequence of invocations (error prone). My suggestion is > that, as long as the AGC API remains as it is, we keep this call, maybe adding more > details in the comment. > > WDYT? > > > Furthermore, verifying that that is not the case is not that straightforward. > > > > Why not do as I proposed in a comment on a previous patch, and pass in > > *aec_dump_mic_level_ directly. > You definitely need to pass the mic gain to the fake recording device when it is used, as that is how the AGC operates. The concern I have is that you involve the fake recording device even when it is not used, and to me that is confusing. What I meant by my comment is that I want you to directly pass the gain that is stored in the recording (or rather copied from the recording). As it is now, the gain is copied into the fake audio device and then passed from that into the AGC, which gives the impression that the fake audio device is involved, which it is not. This change does not require any change in the AGC API. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:21: // Abstract class for the different fake recording devices. 
On 2017/06/29 11:43:35, AleBzk wrote: > On 2017/06/29 05:45:27, peah-webrtc wrote: > > The scheme for the clipping and analog gain is quite intricate, but regardless > > of that I don't think it is general enough to scale. Therefore, I'd propose > (as > > I previously suggested) to have a much simpler implementation now until the > real > > simulation work of the recording device is started, as the implementation > herein > > is currently only a placeholder functionality for this. > > > > My main concerns are the clipping. As it is now, that is hardcoded in the > parent > > class but we have discussed clipping solutions which don't comply to the > scheme > > that is hardcoded. How do you envision that to fit into the framework below? > > Thanks for these concerns. > > First, to improve the clarity, I moved constants and the hard clipping functions > in the anon namespace. > It's good to have them somewhere to avoid code duplication. > > Then note that each implementation of FakeRecordingDeviceWorker can implement > its clipping strategy - not obliged to use ClipSampleInt16/ClipSampleFloat. > Hence, I find the current design extremely flexible and also compact. In fact, > FakeRecordingDeviceLinear::ModifySampleFloat() is just 5 lines of code excl. > comments. > > Also note that we need ModifySampleInt16/ModifySampleFloat mainly because of the > client code. > Another way is to only implement the float interface and scale in [-2^15, 2^15 - > 1] afterwards for int16. > > With that being said, if you find that having moved code to the anon namespace > is still insufficient, could you then be more specific when you say that we > could have a much simpler implementation? With simpler implementation, I rather mean less advanced, since it is yet to be used and since the work on actually simulating this has not started, which means it will anyway likely change. 
For instance the following would be sufficient for the floating point: for (auto & x : float_sample_vector) { x = std::min(1.f, std::max(-1.f, x * mic_level / undo_mic_level)); } https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:123: [this](float& x) { worker_->ModifySampleFloat(&x); }); On 2017/06/29 11:43:35, AleBzk wrote: > On 2017/06/29 05:45:27, peah-webrtc wrote: > > Why not pass the whole vector into ModifySampleFloat? > > Just wanted to decouple FakeRecordingDeviceWorker implementations from a buffer > type (to avoid code duplication across the implementations). > However, the overhead of a lambda invoked for each sample is not that nice > either. > > Thanks to your comments I realized that it's better to fully delegate > FakeRecordingDeviceWorker implementations, since a simulated device may have > memory and hence require a different way to iterate through the samples - > depending on the specific buffer type. For instance, AudioFrame has interleaved > samples and therefore care must be taken if we want to simulate soft-clipping > with memory. > > WDYT? I'd say do this as simply as possible now, as you'll anyway need to rewrite this once the work on simulating the analog microphone starts. https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:72: << "(the unmodified mic gain level will be virtually restored)"; This comment is not really fully correct, right? It depends on what is done. If another mic gain is specified, that will be applied instead. Why not just use the first log line. That should be sufficient for the user. https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... 
File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:92: ? static_cast<FakeRecordingDevice::DeviceKind>( I think you should move the selection of DeviceKind into the fake_recording_device. Having it here just constitutes a cast to something specified elsewhere and I think the mapping between the parameter value and the actual DeviceKind would be more clear if the DeviceKind and the mapping was done at the same place. Please check the constructor for more comments. https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:120: if (settings_.aec_dump_input_filename && settings_.simulate_mic_gain) { This is better, but I'd rephrase it as if (settings_.simulate_mic_gain) { if (settings_.aec_dump_input_filename) { RTC_DCHECK(aec_dump_mic_level_); fake_recording_device_.set_undo_mic_level(aec_dump_mic_level_); } if (fixed_interface) { fake_recording_device_.SimulateAnalogGain(&fwd_frame_); } else { fake_recording_device_.SimulateAnalogGain(in_buf_.get()); } } No need to have two different if-s on simulate_mic_gain https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:140: ap_->gain_control()->set_stream_analog_level( No, I don't think this is ok. The fake recording device should not be involved if the microphone gain is not being simulated. Furthermore, I doubt that this is correct now, as the previous call to set_mic_level was removed. https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device.cc:112: FakeRecordingDevice::FakeRecordingDevice(int initial_mic_level, DeviceKind kind) Having seen the usage of the constructor, I definitely think the comment about the cast in the construction does not make sense. As it is now, the constructor first converts the simulated_mic_kind into a DeviceKind using a static_cast. fake_recording_device_(settings.initial_mic_level, settings_.simulate_mic_gain ? static_cast<FakeRecordingDevice::DeviceKind>( *settings.simulated_mic_kind) : kDefaultFakeRecDeviceKind), Then the resulting DeviceKind variable is only used in the constructor switch statement. I think you should simply pass settings.simulated_mic_kind into the constructor and make the switch statement on that. That would be more direct, and would require less code. https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:62: rtc::Optional<int> undo_mic_level() const { return undo_mic_level_; } This getter is never used. Please remove.
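Much of the debate in this message revolves around the set_stream_analog_level / ProcessStream / stream_analog_level call ordering that the simulator has to reproduce. The handshake can be illustrated with a toy model. To be clear, this is not the WebRTC GainControl API: ToyAgc, SimulateOneFrame, and the level-update rule are made up purely to show the call sequence, under the assumption of a 0-255 level range.

```cpp
#include <algorithm>
#include <cassert>

// Toy AGC: before processing, the caller reports the level the device
// actually used; after processing, it reads back the level the AGC
// recommends for the next frame.
class ToyAgc {
 public:
  void set_stream_analog_level(int level) { actual_level_ = level; }
  void ProcessStream() {
    // Placeholder rule only: a real AGC derives this from the audio.
    suggested_level_ = std::min(255, actual_level_ + 1);
  }
  int stream_analog_level() const { return suggested_level_; }

 private:
  int actual_level_ = 0;
  int suggested_level_ = 0;
};

// One simulated capture frame: feed the suggestion back as the next
// actual level, mimicking a device that always honors the AGC. When the
// OS or user overrides the gain, the value passed in here would differ
// from the previous suggestion - which is why the actual level must be
// reported each frame rather than assumed.
int SimulateOneFrame(ToyAgc* agc, int current_level) {
  agc->set_stream_analog_level(current_level);
  agc->ProcessStream();
  return agc->stream_analog_level();
}
```

The sketch also shows why the thread's design question matters: whatever component owns `current_level` (the aec_dump value or a fake recording device) is the one that appears to "choose" the mic level.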
Hi Per, Comments addressed, two major points that might still be open: 1. Simplify fake recording device as much as possible: I don't think it's worth simplifying the fake rec device code further because otherwise we need to add code to adapt the input data format to ArrayView (or similar) 2. Fake recording device only used when simulating the mic gain: done except for passing the undo mic level to SimulateAnalogGain; I'd prefer to keep that as a member to keep the client code as short as possible In case you still see parts to be improved, let's discuss further offline since this CL has been open for several weeks and we haven't continuously worked on it. Alessio https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:21: // Abstract class for the different fake recording devices. On 2017/06/29 22:03:59, peah-webrtc(OOOtoAug12) wrote: > On 2017/06/29 11:43:35, AleBzk wrote: > > On 2017/06/29 05:45:27, peah-webrtc wrote: > > > The scheme for the clipping and analog gain is quite intricate, but > regardless > > > of that I don't think it is general enough to scale. Therefore, I'd propose > > (as > > > I previously suggested) to have a much simpler implementation now until the > > real > > > simulation work of the recording device is started, as the implementation > > herein > > > is currently only a placeholder functionality for this. > > > > > > My main concerns are the clipping. As it is now, that is hardcoded in the > > parent > > > class but we have discussed clipping solutions which don't comply to the > > scheme > > > that is hardcoded. How do you envision that to fit into the framework below? > > > > > Thanks for these concerns. 
> > > > First, to improve the clarity, I moved constants and the hard clipping > functions > > in the anon namespace. > > It's good to have them somewhere to avoid code duplication. > > > > Then note that each implementation of FakeRecordingDeviceWorker can implement > > its clipping strategy - not obliged to use ClipSampleInt16/ClipSampleFloat. > > Hence, I find the current design extremely flexible and also compact. In fact, > > FakeRecordingDeviceLinear::ModifySampleFloat() is just 5 lines of code excl. > > comments. > > > > Also note that we need ModifySampleInt16/ModifySampleFloat mainly because of > the > > client code. > > Another way is to only implement the float interface and scale in [-2^15, 2^15 > - > > 1] afterwards for int16. > > > > With that being said, if you find that having moved code to the anon namespace > > is still insufficient, could you then be more specific when you say that we > > could have a much simpler implementation? > > With simpler implementation, I rather mean less advanced, since it is yet to be > used and since the work on actually simulating this has not started, which means > it will anyway likely change. > > For instance the following would be sufficient for the floating point, > for (auto & x : float_sample_vector) { > x = std::min(1.f, std::max(-1.f, mic_level/undo_mic_level)); > } > > > > > I removed the hard clipping functions from the anon namespace to avoid that the reader thinks that hard-clipping is the only possible choice. In the future, we might add them back if needed. Regarding the sample code you added in your last comment. Note that we don't have a simple vector because the samples are encoded either in a ChannelBuffer or an AudioFrame instance. As I pointed out in a previous comment, in order to work with vectors we could for instance use ArrayView (or similar). 
This would however end up adding boilerplate and duplicated code, because we need to convert from ChannelBuffer/AudioFrame to ArrayView while supporting multiple channels. This is why, for the time being, I find the current code a good trade-off. https://codereview.webrtc.org/2834643002/diff/220001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:123: [this](float& x) { worker_->ModifySampleFloat(&x); }); On 2017/06/29 22:03:59, peah-webrtc(OOOtoAug12) wrote: > On 2017/06/29 11:43:35, AleBzk wrote: > > On 2017/06/29 05:45:27, peah-webrtc wrote: > > > Why not pass the whole vector into ModifySampleFloat? > > > > Just wanted to decouple FakeRecordingDeviceWorker implementations from a > buffer > > type (to avoid code duplication across the implementations). > > However, the overhead of a lambda invoked for each sample is not that nice > > either. > > > > Thanks to your comments I realized that it's better to fully delegate > > FakeRecordingDeviceWorker implementations, since a simulated device may have > > memory and hence require a different way to iterate through the samples - > > depending on the specific buffer type. For instance, AudioFrame has > interleaved > > samples and therefore care must be taken if we want to simulate soft-clipping > > with memory. > > > > WDYT? > > I'd say do this as simple as possible now, as you'll anyway need to rewrite this > once the work on simulating the analog microphone starts. Yup. Not sure if you've seen the latest PS before you replied. I think I addressed your comment - and find the new code the simplest option for the time being. https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/aec_dump_based_simulator.cc:72: << "(the unmodified mic gain level will be virtually restored)"; On 2017/06/29 22:03:59, peah-webrtc(OOOtoAug12) wrote: > This comment is not really fully correct, right? It depends on what is done. If > another mic gain is specified, that will be implemented. > > Why not just used the first log line. That should be sufficient for the user. Done. https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:92: ? static_cast<FakeRecordingDevice::DeviceKind>( On 2017/06/29 22:04:00, peah-webrtc(OOOtoAug12) wrote: > I think you should move the selection of DeviceKind into the > fake_recording_device. > Having it here just constitutes a cast to something specified elsewhere and I > think the mapping between the parameter value and the actual DeviceKind would be > more clear if the DeviceKind and the mapping was done at the same place. > > Please check the constructor for more comments. Done. https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:120: if (settings_.aec_dump_input_filename && settings_.simulate_mic_gain) { On 2017/06/29 22:04:00, peah-webrtc(OOOtoAug12) wrote:
> This is better, but I'd rephrase it as
>
> if (settings_.simulate_mic_gain) {
>   if (settings_.aec_dump_input_filename) {
>     RTC_DCHECK(aec_dump_mic_level_);
>     fake_recording_device_.set_undo_mic_level(aec_dump_mic_level_);
>   }
>
>   if (fixed_interface) {
>     fake_recording_device_.SimulateAnalogGain(&fwd_frame_);
>   } else {
>     fake_recording_device_.SimulateAnalogGain(in_buf_.get());
>   }
> }
>
> No need to have two different if-s on simulate_mic_gain
Done.
https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:140: ap_->gain_control()->set_stream_analog_level( On 2017/06/29 22:04:00, peah-webrtc(OOOtoAug12) wrote: > No, I don't think this is ok. The fake recording device should not be involved > if the microphone gain is not being simulated. > The code below is executed only when the mic gain is simulated. So it should be ok. RTC_CHECK_EQ(AudioProcessing::kNoError, ap_->gain_control()->set_stream_analog_level( fake_recording_device_.mic_level())); > Furthermore, I doubt that this is correct now, as the previous call to > set_mic_level was removed. It hasn't been removed; see line 151 on the right (latest patch set). https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:35: } Removed since these functions may confuse the reader at this stage. https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/290001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:62: rtc::Optional<int> undo_mic_level() const { return undo_mic_level_; } On 2017/06/29 22:04:00, peah-webrtc(OOOtoAug12) wrote: > This getter is never used. Please remove. Done. https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/audio_processing_simulator.cc:155: } As you pointed out, fake_recording_device should only be used when the mic gain is simulated. https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:50: FakeRecordingDevice(int initial_mic_level, int device_kind); As you requested, this simplifies the client code - no need to cast int to enum.
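The simplification agreed on here — pass the int straight in and do the int-to-DeviceKind mapping inside the class — looks roughly like the following sketch. This is an illustrative stand-in, not the actual WebRTC FakeRecordingDevice; the enum values and their numeric mapping are assumptions for this example.

```cpp
#include <stdexcept>

// Illustrative stand-in for the CL's device-kind selection; the real
// FakeRecordingDevice lives in webrtc/modules/audio_processing/test/.
enum class DeviceKind { kIdentity, kLinear };

class FakeRecordingDevice {
 public:
  // Taking a plain int keeps the cast out of the client code: the
  // int -> DeviceKind mapping lives in exactly one place.
  FakeRecordingDevice(int initial_mic_level, int device_kind)
      : mic_level_(initial_mic_level), kind_(MapKind(device_kind)) {}

  DeviceKind kind() const { return kind_; }
  int mic_level() const { return mic_level_; }

 private:
  static DeviceKind MapKind(int device_kind) {
    switch (device_kind) {
      case 0:
        return DeviceKind::kIdentity;  // Pass samples through unmodified.
      case 1:
        return DeviceKind::kLinear;    // Scale samples by the mic level.
      default:
        // The real test code would RTC_CHECK here instead of throwing.
        throw std::invalid_argument("unknown device kind");
    }
  }

  const int mic_level_;
  const DeviceKind kind_;
};
```

With this shape, the client constructs the device as `FakeRecordingDevice(settings.initial_mic_level, *settings.simulated_mic_kind)` and never needs a static_cast.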
Hi, Thanks for the patch. Here come some more comments, to be coupled with what we discussed offline. https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:118: // When the analog gain is sumulated and an AEC dump is used as input, set typo: sumulated https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:133: ap_->gain_control()->set_stream_analog_level( As discussed offline, set_stream_analog_level must be called even when the mic gain is not simulated. The same goes for stream_analog_level. https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:25: constexpr float kFloatSampleMin = -1.0f; The range for the float values inside APM is [-32768.f:32767.f] https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:42: const rtc::Optional<int>& undo_mic_level_; As discussed offline, please remove this as a reference, and instead only use the undo_mic_level_ located here. https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:60: // mic levels range [0, 255] to [0.0, 1.0]. Since you specify the allowed range for the mic levels here, I think you should add DCHECKS in the setter for that. https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro...
webrtc/modules/audio_processing/test/fake_recording_device.cc:67: void ModifyBufferInt16(AudioFrame* buffer) override { Since you add implementations within the class definition, I think the class definition would benefit in terms of readability if you add an empty line separating the methods that have implementations. https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:74: if (undo_mic_level_ && *undo_mic_level_ > 0) { Regarding *undo_mic_level_ > 0, please see comment below. https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:92: if (undo_mic_level_ && *undo_mic_level_ > 0) { why do you need to check for *undo_mic_level_ > 0 ? If that is not a valid mic level value, I'd rather DCHECK on it in the setter. However, what I guess you want to do is to handle it in a specific manner. As it is now, it is treated in the same manner as if undo_mic_level is not set.
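For readers following the thread, the arithmetic being reviewed here — scale each sample by the simulated mic gain, optionally divide out the gain the real device had already applied (the "undo" level), and clip to APM's float range — fits in a few lines. This is a hedged per-sample sketch with an assumed linear level-to-gain mapping; the actual worker operates on whole AudioFrame/ChannelBuffer buffers, and the constant names are illustrative.

```cpp
#include <algorithm>

namespace {
// APM's float samples live in the int16 range, not [-1.0, 1.0].
constexpr float kFloatSampleMin = -32768.0f;
constexpr float kFloatSampleMax = 32767.0f;
constexpr int kMaxMicLevel = 255;
}  // namespace

// Applies the simulated analog gain for |mic_level| to one sample.
// |undo_mic_level| == 0 means "no undo"; a positive value first divides
// out the gain applied by the original recording device, virtually
// restoring the unmodified mic signal (hence the guard against 0).
float SimulateAnalogGain(float sample, int mic_level, int undo_mic_level) {
  float gain = static_cast<float>(mic_level) / kMaxMicLevel;
  if (undo_mic_level > 0) {
    gain *= static_cast<float>(kMaxMicLevel) / undo_mic_level;
  }
  // Hard clipping; a simulated device with memory could soft-clip instead.
  return std::min(kFloatSampleMax, std::max(kFloatSampleMin, sample * gain));
}
```

The undo step is why a zero undo level must be rejected in the setter: it would put a division by zero into the combined gain.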
Hi Per, As per our offline discussion, I've done the following changes: - float range from [-1, 1] to [-32768, 32767] - FakeRecordingDevice mic level members moved to FakeRecordingDeviceWorker to remove unneeded references and improve readability - AGC get-process-set sequence always called to not break the AGC API - DCHECK to avoid 0 undo mic level + TODO with brief explanation I also merged, therefore the PS includes "webrtc/base" -> "webrtc/rtc_base" replacements. Cheers, Alessio https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:118: // When the analog gain is sumulated and an AEC dump is used as input, set On 2017/08/18 04:27:00, peah-webrtc wrote: > typo: sumulated Done. https://codereview.webrtc.org/2834643002/diff/310001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:133: ap_->gain_control()->set_stream_analog_level( On 2017/08/18 04:27:00, peah-webrtc wrote: > As discussed offline, set_stream_analog_level must be called even when the mic > gain is not simulated. The same goes for stream_analog_level Done. https://codereview.webrtc.org/2834643002/diff/330001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audioproc_float.cc (right): https://codereview.webrtc.org/2834643002/diff/330001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audioproc_float.cc:170: "Specify which microphone kind to use for microphone simulation"); Probably due to a merge, I lost the changes in this file - which have already been checked (it's just the additional params to simulate the analog gain). https://codereview.webrtc.org/2834643002/diff/330001/webrtc/modules/audio_pro... 
File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/330001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:26: constexpr float kFloatSampleMax = 32767.0f; The range was wrong. https://codereview.webrtc.org/2834643002/diff/330001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:152: } As discussed offline with Per, FakeRecordingDevice get/set calls are now forwarded to the FakeRecordingDeviceWorker instance. https://codereview.webrtc.org/2834643002/diff/330001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/330001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:55: void SetUndoMicLevel(const rtc::Optional<int> level); Now the get/set calls are forwarded to FakeRecordingDeviceWorker. That's why I switched to CamelCase.
Awesome! Thanks for the update! This is getting closer, but I don't think the analog AGC simulation still works as before when running on an aecdump recording and the fake microphone is not active. Please have a look and do some tests on audioproc_f in order to verify that the code in the CL is bitexact for that case compared to how it was before. https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:137: ap_->gain_control()->set_stream_analog_level(analog_mic_level_)); Better, but this still does not work: if settings_.simulate_mic_gain is false, aec_dump_mic_level_ is never passed to set_stream_analog_level() since analog_mic_level_ is never set. I would prefer a scheme with an explicit set_stream_analog_level(aec_dump_mic_level_) that does not involve any unnecessary intermediate variable when aecdump recordings are processed without any simulated microphone gain, similarly to the code constructs I proposed earlier. Also, please do some informal and exploratory tests (using audioproc_f) in order to ensure that the new code really works like the former one when used for simulation based on aecdump recordings. That would catch these kinds of errors. https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:35: // fake_mic.set_mic_level(170); Please update the comments to comply with the api changes. https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc (right): https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:226: Please remove empty line after comment.
Hi Per, Thanks for catching the bug, and my apologies for it. I fixed the problem and tested audioproc_f. All four cases work - i.e., AECdump/wav and simulated/non-simulated analog mic gain. Alessio https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:137: ap_->gain_control()->set_stream_analog_level(analog_mic_level_)); On 2017/08/18 08:54:29, peah-webrtc wrote: > Better, but this still does not work: > if settings_.simulate_mic_gain is false, aec_dump_mic_level_ is never passed to > set_stream_analog_level() since analog_mic_level_ is never set. > > I would prefer a scheme with an explicit > set_stream_analog_level(aec_dump_mic_level_) that does not involve any > unnecessary intermediate variable when aecdump recordings are processed without > any simulated microphone gain, similarly to the code constructs I > proposed earlier. > > Also, please do some informal and exploratory tests (using audioproc_f) in order > to ensure that the new code really works like the former one when used for > simulation based on aecdump recordings. That would catch these kinds of errors. > Great catch. Sorry for the oversight. https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.h (right): https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.h:35: // fake_mic.set_mic_level(170); On 2017/08/18 08:54:29, peah-webrtc wrote: > Please update the comments to comply with the api changes. Done. https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro...
File webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc (right): https://codereview.webrtc.org/2834643002/diff/350001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:226: On 2017/08/18 08:54:29, peah-webrtc wrote: > Please remove empty line after comment. Done.
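The get-process-set sequence that these patch sets converged on — set_stream_analog_level() before processing, stream_analog_level() after, with the suggested level fed back to the fake recording device — can be sketched with a toy stand-in for the gain controller. Only the call order mirrors the real gain_control() API; the class, its gain policy, and all thresholds below are invented for illustration.

```cpp
#include <algorithm>

// Toy stand-in for APM's analog gain controller. Only the call sequence
// (set level -> process -> read suggested level) mirrors the real API;
// the gain policy itself is invented for this sketch.
class ToyAnalogAgc {
 public:
  void set_stream_analog_level(int level) { level_ = suggested_ = level; }

  void ProcessStream(float frame_rms) {
    if (frame_rms < 1000.0f) {
      suggested_ = std::min(255, level_ + 10);  // Too quiet: raise gain.
    } else if (frame_rms > 20000.0f) {
      suggested_ = std::max(0, level_ - 10);    // Too loud: lower gain.
    }
  }

  int stream_analog_level() const { return suggested_; }

 private:
  int level_ = 0;
  int suggested_ = 0;
};

// One capture frame: the level returned here is what the fake recording
// device's mic-level setter would receive before the next frame is
// captured, closing the simulated analog gain loop.
int SimulateOneFrame(ToyAnalogAgc* agc, int mic_level, float frame_rms) {
  agc->set_stream_analog_level(mic_level);  // Before processing.
  agc->ProcessStream(frame_rms);
  return agc->stream_analog_level();        // After processing.
}
```

This is also why the sequence must run even when the mic gain is not simulated: skipping set_stream_analog_level() would break the AGC API contract for plain aecdump replays.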
Patchset #15 (id:390001) has been deleted
Thanks for the patch! I think it looks great! I have some minor further comments though. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:148: ap_->gain_control()->set_stream_analog_level(analog_mic_level_)); What about bundling line 148 with 143 using a conditional as ap_->gain_control()->set_stream_analog_level(settings_.aec_dump_input_filename ? *aec_dump_mic_level_:analog_mic_level_)) https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right): https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.h:185: Please remove the empty line on 185 https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:74: RTC_DCHECK_LE(number_of_samples, AudioFrame::kMaxDataSizeSamples); Is this DCHECK really needed? since both samples_per_channel and num_channels are members of buffer (which is of type AudioFrame), I think there is no need to check that they fulfill the requirements of AudioFrame (AudioFrame::kMaxDataSizeSamples)) https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:77: const float sample_f = data[i]; sample_f -> sample. No need to specify the type in the name. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device.cc:77: const float sample_f = data[i]; Why store data[i] in a local variable? It should be sufficient to use data[i] directly instead of using sample_f. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:134: RTC_DCHECK(worker_); This should probably be a CHECK, since it is test code. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:141: LOG(LS_INFO) << "simulate mic level update: " << level; simulate -> Simulate https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:155: RTC_DCHECK(worker_); This should probably be a CHECK, since it is test code. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:160: RTC_DCHECK(worker_); This should probably be a CHECK, since it is test code. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc (right): https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:211: std::unique_ptr<const ChannelBuffer<float>> buff_orig = auto as below?
Hi Per, Thanks for your comments. I moved this CL to Gerrit and all the comments are addressed in there (see https://webrtc-review.googlesource.com/c/src/+/2685). Alessio https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.cc (right): https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.cc:148: ap_->gain_control()->set_stream_analog_level(analog_mic_level_)); On 2017/09/15 09:36:20, peah-webrtc wrote: > What about bundling line 148 with 143 using a conditional as > ap_->gain_control()->set_stream_analog_level(settings_.aec_dump_input_filename ? > *aec_dump_mic_level_:analog_mic_level_)) Done. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/audio_processing_simulator.h (right): https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/audio_processing_simulator.h:185: On 2017/09/15 09:36:20, peah-webrtc wrote: > Please remove the empty line on 185 Done. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device.cc (right): https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:74: RTC_DCHECK_LE(number_of_samples, AudioFrame::kMaxDataSizeSamples); On 2017/09/15 09:36:20, peah-webrtc wrote: > Is this DCHECK really needed? > since both samples_per_channel and num_channels are members of buffer (which is > of type AudioFrame), I think there is no need to check that they fulfill the > requirements of AudioFrame (AudioFrame::kMaxDataSizeSamples)) Done. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device.cc:77: const float sample_f = data[i]; On 2017/09/15 09:36:20, peah-webrtc wrote: > Why store data[i] in a local variable? It should be sufficient to use data[i] > directly instead of using sample_f. Done. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:77: const float sample_f = data[i]; On 2017/09/15 09:36:21, peah-webrtc wrote: > sample_f -> sample. No need to specify the type in the name. Acknowledged. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:134: RTC_DCHECK(worker_); On 2017/09/15 09:36:20, peah-webrtc wrote: > This should probably be a CHECK, since it is test code. Fixed this and all the other DCHECKs. Thanks. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:141: LOG(LS_INFO) << "simulate mic level update: " << level; On 2017/09/15 09:36:21, peah-webrtc wrote: > simulate -> Simulate Done. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:155: RTC_DCHECK(worker_); On 2017/09/15 09:36:20, peah-webrtc wrote: > This should probably be a CHECK, since it is test code. Done. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... webrtc/modules/audio_processing/test/fake_recording_device.cc:160: RTC_DCHECK(worker_); On 2017/09/15 09:36:21, peah-webrtc wrote: > This should probably be a CHECK, since it is test code. Done. https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... File webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc (right): https://codereview.webrtc.org/2834643002/diff/370001/webrtc/modules/audio_pro... 
webrtc/modules/audio_processing/test/fake_recording_device_unittest.cc:211: std::unique_ptr<const ChannelBuffer<float>> buff_orig = On 2017/09/15 09:36:21, peah-webrtc wrote: > auto as below? I want a pointer to a const ChannelBuffer, and const auto would instead return a const std::unique_ptr. |
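The unique_ptr distinction in this last reply is worth spelling out: `const auto` would const-qualify the deduced type itself (the std::unique_ptr), whereas the stated intent is a mutable pointer that owns a const buffer. A minimal sketch, with a plain int standing in for ChannelBuffer<float>:

```cpp
#include <memory>

// unique_ptr<T> converts implicitly to unique_ptr<const T>, so a factory
// can hand out read-only ownership even though it builds a mutable object.
std::unique_ptr<const int> MakeConstBuffer() {
  auto buffer = std::make_unique<int>(42);
  return buffer;  // Implicit unique_ptr<int> -> unique_ptr<const int>.
}
```

Here `auto p = MakeConstBuffer();` deduces std::unique_ptr<const int>: p itself can still be moved or reset, but the pointee cannot be written through. With `const auto p = ...` the pointer could no longer be moved or reset, which is a different constraint.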